STAC Summit, 24 Sep 2019, Hong Kong

STAC Summits

Principal Sponsor
HKEX

STAC Summits bring together CTOs and other industry leaders responsible for solution architecture, infrastructure engineering, application development, machine learning/deep learning engineering, data engineering, and operational intelligence to discuss important technical challenges in trading and investment.





Agenda

Click on the session titles to view the slides.

 

Session tags: Big Compute, Fast Compute, Big Data, Fast Data

 


Opening remarks
 

Richard will offer words of welcome from HKEX.

STAC orientation
 

The STAC Benchmark Council’s mission is to accelerate technology discovery and assessment in the finance industry. Peter will outline how that works and how trading and investment firms can benefit.

Engineering to support modern data science (Big Data, Big Compute)
 

Nearly every institution in today's market wants to improve its data science, whether it's an HFT shop or a discretionary asset manager looking to diversify strategies, a well-established quant fund seeking to get algorithms to market faster, a broker looking to provide better execution, or an exchange aiming to add value or improve surveillance through better analytics. Many of them are doing their best to hire smart data scientists and to source alternative data. Meanwhile, vendors and the open source community are flooding the world with helpful tools and technologies, including AI software frameworks (e.g., TensorFlow, PyTorch, scikit-learn), big data scaling frameworks (e.g., Spark, Dask, HPAT), model life-cycle management tools, cloud services (IaaS, PaaS, MLaaS), processors (CPU, GPU, FPGA, TPU, and other AI chips), and data infrastructures (databases, file systems, memory, and storage architectures).

But expecting data scientists to put new data and technologies to work on their own usually fails, at least at scale. Data scientists need help selecting the right technologies, preparing and managing large amounts of data, designing systems for high performance, and managing uncertain processes in an agile way. That is, they need the help of engineers. Our cross-functional panel of experts will tackle key questions facing the CTO, such as:

  • What are the key engineering and development skills needed on effective data science teams?
  • What are the big technical challenges in model training today, and what are the best solutions?
  • How about the same questions for model backtesting?
  • Should we move the data to the compute or the compute to the data? (See the brief scaling sketch after this list.)
  • What role can public cloud play, and what’s best for on-premises infrastructure?
  • What are the most critical technical choices to get right, and which ones are more forgiving? How can a firm hedge its technical bets?
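
For illustration only (not material from the session), here is a minimal sketch of the kind of scaling work the panel will discuss, using Dask, one of the Python scaling frameworks named above. The file path and column names are assumptions.

    # Aggregate a directory of tick data in parallel with Dask.
    # Path and column names ("symbol", "price", "size") are hypothetical.
    import dask.dataframe as dd

    # One logical dataframe over many files; work is partitioned and
    # deferred until compute() is called, so it can run close to the data
    # on a single machine or a distributed cluster.
    ticks = dd.read_parquet("ticks/2019/*.parquet")

    # Per-symbol volume-weighted average price, computed per partition
    # and then combined.
    ticks["pv"] = ticks["price"] * ticks["size"]
    sums = ticks.groupby("symbol")[["pv", "size"]].sum()
    vwap = sums["pv"] / sums["size"]

    print(vwap.compute().head())

The same logic written against a single in-memory dataframe would push all of the data through one process; frameworks like Dask, Spark, and HPAT exist to avoid exactly that bottleneck.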

STAC briefing: Quant technology activities (Big Data, Big Compute)
 

For nearly a decade, STAC working groups have discussed challenges in an expanding range of big data and big compute workloads such as enterprise tick analytics, strategy backtesting, derivatives valuation, and machine learning/deep learning. Along the way, they have developed numerous benchmark specifications from use cases provided by trading and investment firms. These benchmarks are used to assess new technologies up and down the stack, including software such as tick databases, Spark, and Python scaling frameworks; AI frameworks; public cloud platforms and Kubernetes; CPUs, GPUs, and FPGAs; parallel file systems; storage systems; and storage media, including the latest SSDs and storage class memory. Michel will summarize the activities of these working groups, review the major benchmark suites, and provide the latest benchmark results.

Innovation Roundup - Round 1 (Big Data, Big Compute)
 

The Innovation Roundup is a time-honored STAC format for several vendors to introduce new technologies in a short amount of time, ensuring that they “get to the point”.

  "AI is hot. Here’s how to cool it"
    Dave Weber, Wall Street CTO & Director, Lenovo
  "Storage Solution for Scaling AI"
    Frances Chien, Corporate Advisory Engineer, Dell EMC
  "Levyx: System Software for Computational Storage in Fintech"
    Charmaine Athaide, Regional Business Development Director, USAM Group (speaking for Levyx)
  "Weaponizing I.T. in Finance"
    Paul Serrano, Chief Evangelist, APJ, Nutanix

Everything you wanted to know about FPGA but were afraid to ask (Fast Data, Big Data, Fast Compute, Big Compute)
 

Field-programmable gate arrays (FPGAs) enable business logic to be coded in firmware on platforms with massive parallelism and low-latency I/O. In finance, FPGAs are commonly used for latency-sensitive tasks in the handling of market data and transactions, and they are beginning to be used for compute-intensive tasks like risk calculations. Our panel of users and vendors will tackle key questions on the minds of trading and investment firms thinking about moving some workloads to FPGA for the first time or improving their existing use of FPGA, including:

  • What are the technical strengths and weaknesses of the underlying platforms?
  • What kind of latencies are achievable on FPGA today?
  • What are the most suitable workloads in computational finance and what are the benefits?
  • What are the DevOps challenges with FPGA and how are firms addressing those?
  • Considering the state of vendor offerings, what part of an FPGA solution should a firm buy and what should it build, depending on the use case?

 

Innovation Roundup - Round 2 (Fast Data)
  "Accelerating Financial Workloads with Intel FPGA libraries"
    David (Qi) Huang, Engineering Manager, Intel
  "Think fast, go fast"
    Matthew Grosvenor, VP Technology, Exablaze
  "From Build to Buy: A full range of FPGA solutions"
    Laurent de Barry, Founder & Managing Director, Enyx
  "FPGA for pre-trade risk; a perfect balance of speed and flexibility"
    Richard Man, Sales Director - Asia Pacific, Fixnetix
  "Pure FPGA based Feed Handlers and Tick-to-Trade Solutions"
    Cliff Maddox, Director of Sales, NovaSparks
  "Winning the Race: Ultra-Low Latency with LDA"
    Vahan Sardaryan, Co-Founder and CEO, LDA Technologies

 

STAC briefing: Fast workloads (Fast Data)
 

Since the founding of the STAC Benchmark Council in 2007, working groups have discussed challenges related to “fast data”. This starts with how to reduce the latency of real-time market data, messaging, and trade execution, but it also includes operational intelligence challenges such as time synchronization, event capture, and latency measurement. Accordingly, the first task of these working groups has been to develop benchmark standards for low-latency technology stacks as well as for time synchronization, time-stamping, and capture technologies. These benchmarks have been used to measure overclocked servers, network stacks (e.g., 10GbE, 25GbE, InfiniBand), FPGA solutions for tick-to-trade, ultra-accurate timestamping switches and NICs, and even containers with Kubernetes. Peter will summarize the activities of the working groups, review the major benchmark suites, and provide the latest benchmark results.

Innovation Roundup - Round 3 (Fast Data)
  "Arista financial trading innovations"
    Dave Snowdon, Director of Engineering, Arista Networks
  "Integrating high-accuracy synchronization into your financial operations"
    Carlos Valenzuela, Field Application Engineer, Seven Solutions
  "A glimpse of the future of accurate UTC sync and timing for distributed systems"
    Barry Dropping, Associate Director, Product Line Management, Microchip

 

It’s about time: Keeping trading in sync across Asia (Fast Data)
 

Analyzing events in electronic trading requires understanding exactly when they happened. In today's markets, that is not easy. Challenges include coordinating time references across data centers, applying timestamps in all the necessary applications and network devices, and achieving sufficient timestamp accuracy in a game with moving goal posts. Our panel of users and vendors will discuss the business and regulatory drivers for accurate time synchronization in Asia, then debate key questions such as:

  • What are the pros and cons of different geographic time distribution strategies such as GNSS, fiber-delivered time, and synchronization over an internal WAN?
  • What are the best tradeoffs between accuracy, scaling, and cost that can be achieved with protocols like PPS, PTP, NTP, White Rabbit, and Huygens? (See the brief sketch of the underlying offset/delay arithmetic after this list.)
  • What about software versus hardware timestamping?
  • What kind of accuracies are being achieved in the US and Europe, and what is necessary in Asia? (Why is STAC now citing results in picoseconds?)
  • How can we measure our time accuracy? A basic principle is that to measure the accuracy of one thing, you need something more accurate with which to measure it.
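
As background for the protocol question above, here is a minimal sketch of the two-way time-transfer arithmetic that packet-exchange protocols such as NTP and PTP build on. The numbers in the example are made up for illustration, and real deployments add filtering, hardware timestamping, and asymmetry correction on top of this.

    # Offset and round-trip delay from one request/response exchange,
    # assuming a symmetric network path (the core NTP/PTP arithmetic).
    def offset_and_delay(t1, t2, t3, t4):
        # t1: client sends request      (client clock)
        # t2: server receives request   (server clock)
        # t3: server sends response     (server clock)
        # t4: client receives response  (client clock)
        offset = ((t2 - t1) + (t3 - t4)) / 2.0   # server clock minus client clock
        delay = (t4 - t1) - (t3 - t2)            # round-trip network delay
        return offset, delay

    # Example in microseconds: the client clock reads about 50 microseconds
    # behind the server, over a 200-microsecond round trip.
    print(offset_and_delay(1000.0, 1150.0, 1160.0, 1210.0))   # (50.0, 200.0)

Any asymmetry between the outbound and return paths shows up directly as offset error, which is one reason hardware timestamping and engineered links (such as White Rabbit) matter at the accuracies discussed on this panel.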

 

About STAC Events & Meetings

STAC events bring together CTOs and other industry leaders responsible for solution architecture, infrastructure engineering, application development, machine learning/deep learning engineering, data engineering, and operational intelligence to discuss important technical challenges in finance.
