STAC Summit, 18 Jun 2019, Chicago

STAC Summits


STAC Summits bring together industry leaders in ML/DL engineering, data engineering,
system architecture, application development, infrastructure engineering, and
operational intelligence to discuss important technical challenges in the finance industry.
Come to hear leading ideas and exchange views with your peers.

Tuesday, 18 June 2019
STAC Exchange (exhibits) opens at 8:00am
Meeting starts at 8:30am
Networking Lunch at ~12:00pm
Conference concludes ~4:00pm
Reception immediately following.

The Metropolitan Club
Willis Tower
233 South Wacker Drive, 66th Floor, Chicago

Important Information:
Business Casual Attire is required by the Club.

Please allow extra time to arrive at the Willis Tower.
At the lobby, present your photo ID and let the receptionist know
that you are attending the STAC Summit at the Metropolitan Club.

Partial Agenda (check back for updates!)


Big Compute

Fast Compute

Big Data

Fast Data


STAC Update: AI Workloads   Big Data   Big Compute
  • Dr Michel Debiche, Director of Analytics Research, STAC

Michel will discuss progress toward benchmark standards for ML/DL techniques and technologies based on problems that finance firms in the STAC Benchmark Council have said they care about.

Panel: AI and the engineer   Big Data   Big Compute
  • Speakers to be announced

Financial firms are increasingly using machine learning or deep learning (broadly, “AI”) to improve outcomes or automate functions. Like many other key components of a business plan, AI is a team sport. A brilliant researcher can't be productive without a lot of help with data, technology, and process management. Fortunately, many of these skills exist already in the finance industry, whether it is managing large amounts of data, designing systems for high performance, or managing uncertain processes in an agile way. What are the key engineering and development skills needed on effective data science teams? What technologies are most crucial to understand today and in the near future? How much data science knowledge does a technologist need in order to be a highly effective data engineer or ML engineer? Which parts of the process are most likely to be automated, purchased, or outsourced in the near future, and which parts will continue to rely on in-house (human) technologists? Our panel will discuss.

Innovation Roundup   Big Data   Big Compute
  "Beyond Low Latency Market Data – Real-Time Signals-as-a-Service"
    David Taylor, Chief Technology Officer, Exegy
  "Taming Monte Carlo with Intel PSG libraries"
    Stephen Weston, Numerical Libraries Architect, Intel
  "Effective, lossless, and fast compression for timeseries"
    Edouard Alligand, CEO, QuasarDB
  "Levyx: System Software for Computational Storage in Fintech"
    Matt Meinel, SVP Sales, Bus Dev & Solutions Architecture, Levyx




STAC Update: Big Workloads   Big Data   Big Compute
  • Dr Michel Debiche, Director of Analytics Research, STAC

A lot of innovative solutions for big data and big compute workloads like enterprise tick analytics, strategy backtesting, and market risk are being subjected to STAC Benchmarks. If he can catch his breath, Michel will take us through the latest research.

Innovation Roundup   Big Data   Big Compute
  "Managing NVMe Storage at Rack-Scale"
    V.R. Satish, Pavilion Data Systems
  "A Storage Architecture to Accelerate kdb+"
    Shimon Ben-David, Director of Sales Engineering & Support, Weka.IO
  “What if you don’t have to move data?”
    Steve Wallo, VP Sales Engineering, Vcinity
  "Memorize the Future"
    Charles Fan, Co-founder and CEO, MemVerge
  "How the quest for profits led to the quest for scale"
    Keith Manthey, Global CTO, UDS, Dell EMC
  "Billions of files, PB of data, what do I do now?"
    Christian Smith, Vice President, Product, Igneous


Compute, meet Data: Accelerating I/O at the micro and macro level   Big Data   Big Compute
  • Speakers to be announced

Ever-increasing demands from the business, the growth of new AI workloads, and the increasing pull of public clouds call for faster and more flexible ways to get data where it’s needed in computations. Fortunately, the vendor community is competing and collaborating to solve these problems. Innovations include standards like NVMe and NVMe over Fabrics, persistent memory with interesting properties, new and enhanced distributed file systems, rapidly evolving cloud services, and products that claim to substantially mitigate problems we expect when compute and storage are separated by long distances. Sorting through this Cambrian explosion of offerings is a good problem to have, but a problem nonetheless. How should application developers think about memory and storage today? What’s possible with file, block, and object storage? When is public cloud a realistic option, and when not? What are the promise and perils of disaggregated storage? Our panel of experts will debate.

Networking Luncheon


How to program directly to persistent memory   Big Data   Big Compute
  • Ken Gibson, Director of Persistent Memory Software Architecture, Intel

Persistent memory, or PMEM (which we've often referred to as storage class memory), is a new technology in the memory-storage hierarchy that combines the persistence of storage with the byte-level access and memory-bus usage of DRAM. PMEM vendors promise users greater capacity, availability, and data protection than traditional DRAM, with greater performance and endurance than traditional SSDs. In this talk, Ken will give an overview of how operating systems expose PMEM to applications and how to program directly to it using the open source Persistent Memory Development Kit (PMDK) APIs available on Linux and Windows. Along the way, he will give examples of how some popular open source databases have been modified for PMEM.

STAC Update: Fast Data   Fast Data
  • Peter Lankford, Founder & Director, STAC

Peter will discuss the latest research and Council activities related to low-latency/high-throughput real-time workloads.

Innovation Roundup   Fast Data   Fast Compute
  "Hardware Algos Made Easy: Deploy your trading strategies on FPGAs with the Enyx HLS Framework"
    Laurent de Barry, Founder & Managing Director, Enyx
    Davor Frank, Director, WW Sales Engineering, Solarflare
  "NovaSparks Update, Tick to Trade: Let's Talk Numbers!"
    Cliff Maddox, Director of Sales, NovaSparks




Innovation Roundup   Fast Data
  "New Time Services Available from Orolia"
    Sadie Nedo, Sales Manager, Orolia
  "Synchronization (r)evolution in finance markets"
    Francisco Girela López, Senior FPGA engineer, Seven Solutions
  "Endace Visibility Stack"
    Speaker to be announced, Endace


Case study: Overcoming hurdles to cloud-based RTL verification for FPGA   Fast Data   Fast Compute
  • David Lidrbauch, Product Marketing Manager, Mentor, A Siemens Business

Public clouds—offering huge economies of scale, access to the latest hardware, copious memory, and fast storage—seem like a great way to improve the scalability, performance, and cost of verifying RTL designs for FPGA. But there are challenges. Cloud infrastructure as a service (IaaS) tends to be optimized for HPC-oriented jobs or multi-tenant, always-on, transactional tasks. According to David, IaaS can stumble on write-heavy tasks, latency-sensitive data transfer, and highly interactive workflows like logic debug. FPGA verification teams invest heavily in on-prem environments to avoid these bottlenecks, and they do not want to risk losing those hard-won advantages in moving to a cloud infrastructure. Yet there is hope. David will describe a real-world case in which the customer, a cloud provider, and other parties collaborated on a semi-custom solution that overcame some of these hurdles. More generally, he'll describe architectures that preserve specific verification environment advantages in a public cloud or hybrid cloud, presenting benchmark data to back up those claims.

Panel: Time sync & capture in 2019   Fast Data   Big Data
  • Speakers to be announced

Maintaining event intelligence in the high-frequency battlefield continues to get more challenging. Understanding events with clarity means the ability to capture those events, understand exactly when they happened, and pull them into meaningful analysis, whether that’s for operations management or strategy development and backtesting. Our panelists will discuss these challenges and opine on the best ways to solve them. What kind of accuracy do trading firms need in the near future, and what kind of accuracy is possible? Are these even separable questions, or do new capabilities create their own demand? What kind of network does high accuracy require? How are 40G and higher bandwidths affecting requirements for capture? How are advances in storage and memory affecting capture architectures? What’s changing and what’s not in the build-vs-buy equation?

Additional Fast Data sessions to be announced   Fast Data   Fast Compute



~5:00pm Networking Reception

About STAC Events & Meetings

STAC meetings bring together industry leaders to focus on challenging areas of financial technology. They range from large in-person gatherings (STAC Summits) to webinars and working group teleconferences. 

Event Registration