STAC Summit, 24 Sep 2019, Hong Kong

STAC Summits

Hosted by
HKEX


STAC Summits bring together CTOs and other industry leaders responsible for solution architecture, infrastructure engineering, application development, machine learning/deep learning engineering, data engineering, and operational intelligence to discuss important technical challenges in trading and investment. Come to hear leading ideas and exchange views with your peers. If you have ideas for the agenda, please contact us.





STAC Summits are famous for their depth and density. The carefully curated agendas go deep into important technical topics but still cover a lot of ground. The inaugural STAC Summit in Asia will stay true to this tradition. It will dive into a few key issues in quant and trade-flow technology, while offering data and opinions across many more topics.

We are currently lining up the expert speakers for this event. More will be announced soon.


Partial Agenda
(Session speakers to be announced. Agenda subject to change.)

 

Big Compute

Fast Compute

Big Data

Fast Data

 


11:30am   Doors Open


 

12:00pm   Welcome Lunch


 

1:00pm   Opening remarks
 

As our host for the event, Richard will offer words of welcome.

    STAC orientation
 

The STAC Benchmark Council’s mission is to accelerate technology discovery and assessment in the finance industry. Peter will outline how that works and how trading and investment firms can benefit.

    Panel: Engineering to support modern data science    Big Data   Big Compute
 

Nearly every institution in today’s market wants to improve its data science—whether it’s an HFT shop or a discretionary asset manager that wants to diversify strategies, a well-established quant fund seeking to get algorithms to market faster, a broker looking to provide better execution, or an exchange aiming to add value or improve surveillance through better analytics. Many of them are trying their best to hire smart data scientists and to source alternative data. Meanwhile, vendors and the open source community are flooding the world with helpful tools and technologies, including AI software frameworks (e.g., TensorFlow, PyTorch, Scikit-learn), big data scaling frameworks (e.g., Spark, Dask, HPAT), model life-cycle management tools, cloud services (IaaS, PaaS, MLaaS), processors (CPU, GPU, FPGA, TPU, other AI chips), and data infrastructures (databases, file systems, memory, and storage architectures).

But expecting data scientists to utilize new data and technologies on their own usually fails, at least at scale. Data scientists need help selecting the right technologies, prepping and managing large amounts of data, designing systems for high performance, and managing uncertain processes in an agile way. That is, they need the help of engineers. Our cross-functional panel of experts will tackle key questions facing the CTO, such as:

  • What are the key engineering and development skills needed on effective data science teams?
  • What are the big technical challenges in model training today, and what are the best solutions?
  • How about the same questions for model backtesting?
  • Should we move data to the compute or compute to the data?
  • What role can public cloud play, and what’s best for on-premises infrastructure?
  • What are the most critical technical choices to get right, and which ones are more forgiving? How can a firm hedge its technical bets?

    STAC briefing: Quant technology activities    Big Data   Big Compute
 

For nearly a decade, STAC working groups have discussed challenges in an expanding range of big data and big compute workloads such as enterprise tick analytics, strategy backtesting, derivatives valuation, and machine learning/deep learning. Along the way, they have developed numerous benchmark specifications from use cases provided by trading and investment firms. These benchmarks are used to assess new technologies up and down the stack—including software like tick databases, Spark, and Python scaling frameworks; AI frameworks; public cloud platforms and Kubernetes; CPUs, GPUs, and FPGAs; parallel file systems; storage systems; and storage media including the latest SSDs and storage class memory. Michel will summarize the activities of these working groups, review the major benchmark suites, and provide the latest benchmark results.

    Innovation Roundup (quant technology related)    Big Data   Big Compute
 

The Innovation Roundup is a time-honored STAC format in which several vendors introduce new technologies in a short amount of time, ensuring that they “get to the point”. In this first Innovation Roundup of the day, presentations will focus on innovations that promise to reduce the time to market of new algorithms, improve the response time of analytics, or reduce the cost of handling big workloads.

~3:00pm   Break


 

    Panel: Everything you wanted to know about FPGA but were afraid to ask    Fast Data   Big Data   Fast Compute   Big Compute
 

Field-programmable gate arrays (FPGAs) enable business logic to be coded in firmware on platforms with massive parallelism and low-latency I/O. In finance, FPGAs are commonly used for latency-sensitive tasks in the handling of market data and transactions, and they are beginning to be used for compute-intensive tasks like risk calculations. Our panel of users and vendors will tackle key questions on the minds of trading and investment firms thinking about moving some workloads to FPGA for the first time or improving their existing use of FPGA, including:

  • What are the technical strengths and weaknesses of the underlying platforms?
  • What kind of latencies are achievable on FPGA today?
  • What are the most suitable workloads in computational finance and what are the benefits?
  • What are the DevOps challenges with FPGA and how are firms addressing those?
  • Considering the state of vendor offerings, what part of an FPGA solution should a firm buy and what should it build, depending on the use case?

    STAC briefing: Fast workloads    Fast Data
 

Since the founding of the STAC Benchmark Council in 2007, working groups have discussed challenges related to “fast data”. This starts with the challenge of how to reduce the latency of realtime market data, messaging, and trade execution, but it also includes operational intelligence challenges such as time synchronization, event capture, and latency measurement. Accordingly, the first task of these working groups has been to develop benchmark standards for low-latency technology stacks, as well as for time-synchronization, time-stamping, and capture technologies. These benchmarks have been used to measure overclocked servers, network stacks (e.g., 10GbE, 25GbE, InfiniBand), FPGA solutions for tick-to-trade, ultra-accurate timestamping switches and NICs, and even containers with Kubernetes. Peter will summarize the activities of the working groups, review the major benchmark suites, and provide the latest benchmark results.

    Innovation Roundup (fast data related)    Fast Data
 

Presentations in the second Innovation Roundup of the day will focus on innovations that promise to improve latency, realtime throughput, time synchronization, or capture.

    Panel: It’s about time    Fast Data
 

Analyzing events in electronic trading requires understanding exactly when they happened. In today's markets, that is not easy. Challenges include coordinating time references across data centers, applying timestamps in all the necessary applications and network devices, and achieving sufficient timestamp accuracy in a game with moving goal posts. Our panel of users and vendors will discuss the business and regulatory drivers for accurate time synchronization in Asia, then debate key questions such as:

  • What are the pros and cons of different geographic time distribution strategies such as GNSS, fiber-delivered time, and synchronization over an internal WAN?
  • What are the best tradeoffs between accuracy, scaling, and cost that can be achieved with approaches like PPS, PTP, NTP, White Rabbit, and Huygens?
  • What about software versus hardware timestamping?
  • What kind of accuracies are being achieved in the US and Europe, and what is necessary in Asia? (Why is STAC now citing results in picoseconds?)
  • How can we measure our time accuracy? A basic principle is that to measure the accuracy of one thing, you need something more accurate with which to measure it (a minimal illustration follows below).
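
To make that last question concrete, here is a minimal sketch (illustrative only, not part of the panel material) of the two-way time-transfer calculation that NTP- and PTP-style protocols rely on. The timestamps and the symmetric-delay assumption are hypothetical; in practice the resulting offset estimate is only as trustworthy as the reference clock and the symmetry of the network path, which is exactly why a more accurate reference is needed.

    # Illustrative sketch: estimate a local clock's offset from a reference clock
    # using the classic four-timestamp exchange (as in NTP/PTP). Assumes network
    # delay is the same in both directions; any asymmetry becomes an undetected
    # error, so the result is only as good as the reference clock and the path.

    def estimate_offset_and_delay(t1, t2, t3, t4):
        """
        t1: request sent      (local clock)
        t2: request received  (reference clock)
        t3: reply sent        (reference clock)
        t4: reply received    (local clock)
        Returns (offset, round_trip_delay) in the units of the inputs.
        """
        offset = ((t2 - t1) + (t3 - t4)) / 2.0  # how far the local clock lags the reference
        delay = (t4 - t1) - (t3 - t2)           # round-trip network delay
        return offset, delay

    # Example: a local clock running 150 microseconds behind the reference,
    # with a symmetric 500-microsecond one-way path delay (times in seconds).
    print(estimate_offset_and_delay(0.000000, 0.000650, 0.000700, 0.001050))
    # -> roughly (0.00015, 0.001)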

~6:30pm   Networking Reception

 

~8:00pm   Event Concludes

 

About STAC Events & Meetings

STAC meetings bring together industry leaders to focus on challenging areas of financial technology. They range from large in-person gatherings (STAC Summits) to webinars and working group teleconferences. 

Event Registration

Sponsors

 

PLATINUM SPONSORS



GOLD SPONSORS





Exhibitors