STAC Summit, 17 Oct 2019, NYC

STAC Summits

STAC Summits bring together CTOs and other industry leaders responsible for solution architecture, infrastructure engineering, application development, machine learning/deep learning engineering, data engineering, and operational intelligence to discuss important technical challenges in trading and investment.
Come to hear leading ideas and exchange views with your peers.

WHERE
New York Marriott Marquis, 1535 Broadway, New York
Astor Ballroom

Agenda

Click on the session titles to view the slides and videos.

 

Each session is tagged by workload domain: Big Compute, Fast Compute, Big Data, Fast Data.

 


Deep Learning at Scale with PyTorch (Big Data, Big Compute)

As financial firms start to use Deep Learning (DL), many of them run into the challenge of deploying DL models at scale. PyTorch, an open-source machine learning platform, is designed to support both rapid development and seamless, reliable deployment of models at scale. A key principle of this design is to let researchers work in the programming paradigm they're used to, rather than requiring them to shift paradigms for the sake of scalability. In this talk, Jeff Smith, the Senior Engineering Manager supporting the PyTorch team at Facebook AI, will explain PyTorch's design philosophy and crucial innovations, and how they enable Facebook to run over 400 trillion DL predictions per day.
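To make that design principle concrete, here is a minimal sketch (not Facebook's production setup) of the flow PyTorch is built around: a model developed in ordinary eager-mode Python is compiled with TorchScript (torch.jit.script) into a serialized artifact that can be loaded and served independently of the research code. The model, sizes, and file name below are illustrative.

    import torch
    import torch.nn as nn

    # Researchers write an ordinary eager-mode model in plain Python,
    # with no deployment-specific paradigm to learn.
    class TinyClassifier(nn.Module):
        def __init__(self, in_features: int = 32, classes: int = 4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_features, 64),
                nn.ReLU(),
                nn.Linear(64, classes),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)

    model = TinyClassifier().eval()

    # TorchScript compiles the same model into a serialized artifact
    # that a server (e.g., via torch.jit.load or the C++ runtime) can
    # execute at scale, independent of the research code.
    scripted = torch.jit.script(model)
    scripted.save("tiny_classifier.pt")

    # The deployed artifact matches the eager model's output.
    x = torch.randn(8, 32)
    assert torch.allclose(model(x), torch.jit.load("tiny_classifier.pt")(x))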

Enabling ML in an enterprise production platform (Big Data, Big Compute)

Achieving end-to-end efficiency and consistency in analytics pipelines is very difficult. It's even more difficult when the financial stakes are high, models must comply with regulations, or the number of developers is large. JPMC faces all of these challenges with its enterprise-scale risk, trade management, and analytics platform, Athena. Over time, Athena has evolved to over 35 million lines of Python code, thousands of developers, and 10,000-15,000 production changes per week in business-critical areas such as risk, analytics, and pricing. Today the proliferation of machine learning within the bank is raising the bar in many ways, including a higher rate of package churn, ever more data scientists and quant traders writing their own code, and new demands for hardware acceleration. As the leader of the team extending Athena's ML capabilities, Misha will share the key elements that make the platform succeed and discuss how his team is tackling the next phase of its evolution.

Innovation Roundup (Big Data, Big Compute)
  “FPGAs accelerating AI for financial services”
    Mutema Pittman, Director of Enterprise Business Division, Programmable Solutions Group, Intel
  "Simplifying Deep Learning Infrastructure with Dell EMC"
    Boni Bruno, Chief Solutions Architect, Dell EMC
  "Xilinx Alveo, Vitis and Quantitative Finance Library "
    Rajiv Jain, Director – Data Center Group, Xilinx
  "No More Tiers: Radical Flash Savings to Redefine AI and Market Data Storage Infrastructure"
    Jeff Denworth, VP, Products and Marketing, VAST Data
  "Chaos and Pain in Machine Learning, and the ‘DevOps for ML Manifesto’"
    Mark Coleman, VP Product & Marketing, Dotscience

 

MLOps: A familiar but strange endeavor (Big Data, Big Compute)

MLOps refers to a collaborative approach between data scientists and technologists for managing the lifecycle of machine learning models, from training through deployment to monitoring, refinement, and retirement. Managing analytic models is not a new challenge in finance, but MLOps introduces considerable new complexities. Why is that? How do the challenges differ by type of firm (e.g., a large, regulated institution vs. a small, unregulated one)? What constitutes best practice for managing data, models, and research histories in different business environments? Which pain points can new technology help with? What is required from technical leaders and the organizations they manage? Our panel of experts will weigh in.
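As a hedged illustration of what "managing the lifecycle" can mean in practice, the sketch below models a registry entry that only permits the transitions named above (training, deployment, monitoring, refinement, retirement) and keeps an audit trail of the kind regulated firms typically need. The states, transitions, and names are illustrative assumptions, not a standard.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Illustrative lifecycle states and allowed transitions; real
    # registries (in-house or off-the-shelf) will differ.
    ALLOWED = {
        "training":   {"deployed"},
        "deployed":   {"monitoring"},
        "monitoring": {"refining", "retired"},
        "refining":   {"training"},   # refinement loops back to training
        "retired":    set(),
    }

    @dataclass
    class ModelRecord:
        name: str
        version: int
        state: str = "training"
        history: list = field(default_factory=list)  # audit trail

        def transition(self, new_state: str) -> None:
            if new_state not in ALLOWED[self.state]:
                raise ValueError(f"{self.state} -> {new_state} not allowed")
            self.history.append(
                (self.state, new_state, datetime.now(timezone.utc).isoformat()))
            self.state = new_state

    m = ModelRecord("credit_risk_model", version=3)  # hypothetical model
    m.transition("deployed")
    m.transition("monitoring")
    print(m.state, len(m.history))  # monitoring 2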

STAC Update: Big Compute (Big Compute)

Michel will discuss the latest research and activities in compute-intensive workloads such as deep learning and derivatives risk.

Why a single C++ API makes sense for heterogeneous compute infrastructure (Fast Data, Big Data, Fast Compute, Big Compute)

The future of computing in finance certainly seems heterogeneous. It's a fair bet that in the coming years, optimizing the latency, throughput, and cost efficiency of a given workload will increasingly require some combination of scalar (CPU), vector (GPU), matrix (AI), and spatial (FPGA) processors. These architectures require an efficient software programming model to deliver performance. As we often discuss at STAC, high-level languages like Python or frameworks like Spark make it relatively easy to deal with this diversity, since they allow for highly optimized platform-specific libraries under the covers. But what about programs written in C++? Many performance-obsessed programmers prefer C++ because it provides the greatest exposure to the capabilities of the underlying hardware. With that exposure, however, comes a requirement to code to the specifics of the hardware, making coding difficult and non-portable. Furthermore, attempts to program FPGAs in C++ have historically suffered in terms of performance. In short, no one has yet come up with a market-winning answer to the tension between performance, portability, and ease of use. However, as a provider of all of the processor types above, Intel has developed a point of view on the best approach to these challenges. As a senior technologist on Intel's compiler team, JD will articulate that point of view and outline how Intel is putting it into practice through its oneAPI initiative (including architecture, tooling, and development status).
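For contrast with the C++ situation the talk addresses, here is a minimal sketch of the "optimized libraries under the covers" point about high-level languages: in PyTorch (used purely as an example), one line of device selection lets identical code run on whichever processor is present.

    import torch

    # The same high-level code runs unchanged on CPU or GPU; the heavy
    # lifting is delegated at runtime to platform-specific libraries
    # (e.g., MKL on CPUs, cuBLAS/cuDNN on NVIDIA GPUs).
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)
    c = a @ b  # dispatches to an optimized GEMM for the selected device
    print(device, c.shape)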

STAC Update: Big Data (Big Data)

Michel will discuss the latest research and activities in data-intensive workloads such as tick analytics and backtesting.

Innovation Roundup (Big Data)
  "NVMe-oF for High Frequency Trading"
    VR Satish, Co-Founder and CTO, Pavilion Data Systems
  "New optimization strategies for in-memory analytics using Optane persistent memory"
    Glenn Wright, Systems Architect, Kx Systems
  "Enabling Low Latency Market Data Applications with Storage Class Memory"
    Brett Miller, Sr Solutions Architect, MemVerge
  "Simpler historical updates management"
    Edouard Alligand, CEO, QuasarDB
  "Scale-in Software for Capital Markets Computing"
    Matt Meinel, SVP of Sales, Business Development and Solutions Architecture, Levyx
  "Unlimited Linear Scalable Performance - Try That With Your Box"
    Björn Kolbeck, Co-Founder and CEO, Quobyte

 

Drinking from the firehose: streaming ingest benchmarks (Fast Data, Big Data)

Most of the fast data that flows through a financial organization winds up as big data. That is, it's captured in a database somewhere for analysis, either immediately or later. But the process of ingesting high-volume streaming data and making it available through visualizations or query interfaces is challenging and getting more so. This session will examine empirical data from two examples in this problem domain. First, Peter will present a benchmarking project on a visualization system designed specifically for real-time streaming data. Then he and Edouard will present a prototype of database ingest tests using event-driven data streams, which will be proposed for consideration by the STAC-M3 Working Group.
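As a hedged sketch of what such an ingest test might measure (the actual STAC-M3 proposal is up to the Working Group), the toy harness below pushes events through a queue into an in-memory "store" and reports both ingest throughput and the time each event takes to become queryable. All components and parameters here are illustrative.

    import queue
    import threading
    import time

    events = queue.Queue()
    store = []          # stand-in for a real tick database
    visible_at = {}     # event id -> delay until queryable (seconds)

    def producer(n: int) -> None:
        # Emit n events as fast as possible, stamping the send time.
        for i in range(n):
            events.put((i, time.perf_counter()))
        events.put(None)  # sentinel: end of stream

    def ingester() -> None:
        # Drain the queue into the store, recording time-to-visibility.
        while (item := events.get()) is not None:
            event_id, sent = item
            store.append(item)
            visible_at[event_id] = time.perf_counter() - sent

    N = 100_000
    t0 = time.perf_counter()
    threading.Thread(target=producer, args=(N,)).start()
    ingester()
    elapsed = time.perf_counter() - t0

    lat = sorted(visible_at.values())
    print(f"ingest rate: {N / elapsed:,.0f} events/s")
    print(f"median time-to-queryable: {lat[N // 2] * 1e6:.1f} us")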

STAC Update: Fast Data (Fast Data)

Peter will discuss the latest research and activities in latency-sensitive workloads such as tick-to-trade processing.

Innovation Roundup (Fast Data, Fast Compute)
  "Exegy Xero – Setting a New Benchmark for Tick-to-Trade Speed"
    Jason White, Vice President - Product Management, Exegy
  "Cracking the code on FPGA: How Enyx is making hardware performance more accessible."
    Laurent de Barry, Founder & Managing Director, Enyx
  “NBBO and ETF Calculators”
    Pierre Gardrat, VP Engineering, NovaSparks
  "Winning the Race: Ultra-Low Latency with LDA"
    Vahan Sardaryan, Co-Founder and CEO, LDA Technologies
  "Do you really know what happens inside your FPGA?"
    Frederic Leens, CEO, Exostiv Labs

 

How hard could it be? Understanding network traffic at the picosecond level (Fast Data)

The proliferation of double-digit-nanosecond (FPGA-based) trading systems is forcing firms to measure at ever finer accuracy. Several vendors now offer sub-nanosecond or “picosecond-scale” network measurement technologies. Firms that make use of such technologies need to consider what other changes, if any, they need to make to their measurement infrastructure as a result. Is it feasible to simply “drop in” picosecond-scale network measurements, or are fundamental changes in thinking required? In this session, Matthew will offer theoretical and practical viewpoints on the implications of picosecond-scale network measurement techniques. To illustrate these, he will refer to Exablaze's work with STAC to “upgrade” certain STAC benchmarks to accuracies better than a nanosecond.
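One way to see the core problem: a capture device that quantizes to whole nanoseconds cannot distinguish two packets a few hundred picoseconds apart, so their ordering, and any latency computed between them, is lost. A small worked example with illustrative numbers:

    # Two packet arrivals 400 ps apart, expressed in picoseconds.
    t_a = 1_000_000_100  # ps
    t_b = t_a + 400      # ps

    def quantize(t_ps: int, resolution_ps: int) -> int:
        # Truncate a timestamp to the capture device's resolution.
        return (t_ps // resolution_ps) * resolution_ps

    # A 1 ns (1000 ps) timestamp collapses both events onto one tick,
    # so ordering and the 400 ps gap are unrecoverable.
    print(quantize(t_a, 1000) == quantize(t_b, 1000))  # True: a tie

    # A 100 ps timestamp preserves both the ordering and the gap.
    print(quantize(t_b, 100) - quantize(t_a, 100))     # 400 (ps)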

Innovation Roundup (Fast Data)
  "Mellanox: Performance, Innovation, Reliability, Openness"
    Lior Paster, Director, Business Development and Sales, Mellanox
  "Arista financial trading innovations"
    Ciaran Kennedy, Global CSO, Arista
  "In production: better than 100 nanos accuracy with NTP and PTP, fault tolerance, and forensic traceability."
    Victor Yodaiken, CEO, FSMLabs

 

Democratizing time sync to level the playing field (Fast Data, Fast Compute)

How can exchanges ensure fairer execution? How can they improve the simultaneity of market data receipt? How can liquidity takers reduce what they give up on multi-venue trades to market makers with faster pipes? And how can any of this be done without huge investments in infrastructure? According to Balaji, the answer to all these questions starts in one place: highly accurate software-based time synchronization. He claims that accurate time sync deployed at scale can transform an unpredictable market into a nearly perfect FIFO machine, even if that market is built upon extremely jittery infrastructure. In this talk, Balaji will back up his claim with a demonstration. By activating time sync in a simulated market running across several dozen (low-end) public cloud VMs, he will attempt to show that the market behaves as if it had deterministic and equal latency throughout. How well will he do? Come to find out and debate the implications.
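The mechanism behind Balaji's claim can be sketched in a few lines (this simulation is illustrative, not his demo): if every gateway stamps orders with a tightly synchronized clock, a venue can sequence by send time rather than arrival time, neutralizing jitter in the transport.

    import random

    random.seed(7)

    # Orders stamped by a (synchronized) gateway clock, 1 us apart.
    orders = [{"id": i, "sent_us": float(i)} for i in range(10)]

    # Jittery transport: 0-50 us of random delay scrambles arrivals.
    for o in orders:
        o["arrived_us"] = o["sent_us"] + random.uniform(0, 50)

    by_arrival = sorted(orders, key=lambda o: o["arrived_us"])
    by_send    = sorted(orders, key=lambda o: o["sent_us"])

    # Sequencing on synchronized send timestamps restores perfect FIFO
    # even though the network delivered the orders out of order.
    print("arrival order:", [o["id"] for o in by_arrival])
    print("send order:   ", [o["id"] for o in by_send])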