STAC Summit, 15 May 2019, London

STAC Summits

STAC Summits bring together industry leaders in ML/DL engineering, data engineering,
system architecture, application development, infrastructure engineering, and
operational intelligence to discuss important technical challenges in the finance industry.
Come to hear leading ideas and exchange views with your peers.

WHERE
Leonardo Royal Hotel London City (formerly the Grange City Hotel)
8-14 Coopers Row, London EC3N 2BQ


Agenda

Click on the session titles to view the slides.

 

Big Compute

Fast Compute

Big Data

Fast Data

 


STAC Update: AI Workloads   Big Data   Big Compute
 

Michel will discuss progress toward benchmark standards for ML/DL techniques and technologies based on problems that finance firms in the STAC Benchmark Council have said they care about.

Panel: AI and the engineer   Big Data   Big Compute
 

Financial firms are increasingly using machine learning or deep learning (broadly, “AI”) to improve outcomes or automate functions. Like many other key components of a business plan, AI is a team sport. A brilliant researcher can't be productive without a lot of help with data, technology, and process management. Fortunately, many of these skills exist already in the finance industry, whether it is managing large amounts of data, designing systems for high performance, or managing uncertain processes in an agile way. What are the key engineering and development skills needed on effective data science teams? What technologies are most crucial to understand today and in the near future? How much data science knowledge does a technologist need in order to be a highly effective data engineer or ML engineer? Which parts of the process are most likely to be automated, purchased, or outsourced in the near future, and which parts will continue to rely on in-house (human) technologists? Our panel will debate.

Innovation Roundup   Fast Compute   Big Data   Big Compute
  "Bigstream: The Autonomous Data Platform for Accelerated AI"
    Roop Ganguly, Chief Solutions Architect, Bigstream
  "Beyond Low Latency Market Data – Real-Time Signals-as-a-Service"
    David Taylor, Chief Technology Officer, Exegy
  "YellowDog for Financial Services: A new breed of cloud partner"
    Simon Ponsford, CTO, YellowDog
  "AI is hot. Here’s how to cool it."
    Dave Weber, Wall Street CTO & Director, Lenovo

 

Tips on delivering agile AI   Big Data   Big Compute
 

Delivering end-to-end AI solutions is a complex, multi-step process. Studies show that many firms struggle with one or more of these steps. The challenges increase in heavily regulated environments like financial services. Kortical is a startup that has built an AI-as-a-service platform from which it serves enterprises in multiple industries. In this talk, Alex will walk through the process of delivering effective AI once data has been prepared—from feature engineering, through model training, tuning, and backtesting, to deployment and life-cycle management of models as APIs—highlighting some of the key challenges and providing tips to overcome them.
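
To make those steps concrete, here is a minimal, hypothetical sketch of that flow in Python using scikit-learn: features are scaled, a model is trained and tuned against a time-ordered split (a crude stand-in for backtesting), and the fitted artifact is persisted so it can be served behind an API. The library choices, parameters, and file names are illustrative assumptions, not a description of Kortical's platform.

```python
# Minimal, hypothetical sketch of a train -> tune -> backtest -> deploy flow.
# Library choices and parameters are illustrative; this is not Kortical's platform.
import joblib
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy, time-ordered data standing in for already-prepared features and labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Feature scaling and model in one pipeline, tuned with a time-aware split
# (a crude stand-in for walk-forward backtesting).
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", GradientBoostingClassifier()),
])
search = GridSearchCV(
    pipeline,
    param_grid={"model__n_estimators": [100, 300], "model__max_depth": [2, 3]},
    cv=TimeSeriesSplit(n_splits=5),
    scoring="roc_auc",
)
search.fit(X, y)
print("best params:", search.best_params_, "cv AUC:", round(search.best_score_, 3))

# Persist the fitted model so a separate service can load it, expose it as an
# API, and handle versioning and retirement as part of life-cycle management.
joblib.dump(search.best_estimator_, "model_v1.joblib")
```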

Innovation Roundup   Big Data   Big Compute
  "Taming Monte Carlo with Intel PSG libraries"
    Dr. Stephen Weston, Numerical Libraries Architect, Intel
  "Effective, lossless, and fast compression for timeseries"
    Jean-Claude Tagger, COO, QuasarDB
  "Levyx: System Software for Computational Storage in Fintech"
    Matt Meinel, SVP Sales, Bus Dev & Solutions Architecture, Levyx

 

Grokking the TCO of high-performance systems   Big Data   Big Compute
 

When studying the ROI of new technologies, firms pay great attention to the "R": how much a new architecture will reduce time-to-market for models, enable the firm to meet new regulatory requirements, improve data-scientist productivity, and so on. But quantifying ROI also requires truly understanding the "I". This may sound simple, but it can be complex for high-performance systems such as HPC grids, machine learning clusters, or big data farms. The total cost of ownership (TCO) of such systems involves many technical and human elements, even if the system is based in a public cloud. At NAG, Andrew spends considerable time advising organizations on how to model the TCO of potential investments in order to guide decisions. In this talk, he will propose a TCO calculator and take audience feedback on it with respect to different use cases important to financial firms.
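
By way of illustration, the "I" side of such a model typically amortises capital spend and adds the recurring items. The sketch below is a simplified, hypothetical example of that arithmetic; the cost categories and figures are assumptions for illustration, not NAG's calculator.

```python
# Simplified, hypothetical TCO sketch for a high-performance cluster.
# Cost categories and figures are illustrative assumptions, not NAG's model.
from dataclasses import dataclass

@dataclass
class ClusterTco:
    hardware_capex: float              # servers, network, storage (paid up front)
    useful_life_years: float           # amortisation period for the capex
    power_cooling_per_year: float
    datacentre_space_per_year: float
    software_licences_per_year: float
    staff_per_year: float              # admins, devops, support share
    cloud_spend_per_year: float = 0.0  # any public-cloud component

    def annual_tco(self) -> float:
        # Amortised capex plus all recurring operating costs.
        return (
            self.hardware_capex / self.useful_life_years
            + self.power_cooling_per_year
            + self.datacentre_space_per_year
            + self.software_licences_per_year
            + self.staff_per_year
            + self.cloud_spend_per_year
        )

grid = ClusterTco(
    hardware_capex=2_000_000,
    useful_life_years=4,
    power_cooling_per_year=150_000,
    datacentre_space_per_year=100_000,
    software_licences_per_year=250_000,
    staff_per_year=400_000,
)
print(f"Annual TCO: ${grid.annual_tco():,.0f}")  # the "I" to set against the "R"
```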

STAC Update: Big Workloads   Big Data   Big Compute
 

A lot of innovative solutions for big data and big compute workloads like enterprise tick analytics, strategy backtesting, and market risk are being subjected to STAC Benchmarks. If he can catch his breath, Michel will take us through the latest research.

Innovation Roundup   Big Data   Big Compute
  "How the quest for profits led to the quest for scale"
    Tim O’Callaghan, UDS EMEA Advisory Sales Engineer, Dell Technologies
  "A Storage Architecture to Accelerate kdb+"
    Derek Burke, Regional Sales Manager, Weka.IO
  "Managing NVMe Storage at Rack-Scale"
    Gurpreet Singh, CEO, Pavilion Data Systems
  “What if you don’t have to move data?”
    Steve Wallo, CTO, Vcinity

 

Compute, meet Data: Accelerating I/O at the micro and macro level   Big Data
 

Ever-increasing demands from the business, the growth of new AI workloads, and the increasing pull of public clouds call for faster and more flexible ways to get data where it’s needed in computations. Fortunately, the vendor community is competing and collaborating to solve these problems. Innovations include standards like NVMe and NVMe over Fabrics, persistent memory with interesting properties, new and enhanced distributed file systems, rapidly evolving cloud services, and products that claim to substantially mitigate problems we expect when compute and storage are separated by long distances. Sorting through this Cambrian explosion of offerings is a good problem to have, but a problem nonetheless. How should application developers think about memory and storage today? What’s possible with file, block, and object storage? When is public cloud a realistic option, and when not? What are the promise and perils of disaggregated storage? Our panel of experts will debate.

STAC Update: Fast Data   Fast Data
 

Peter will discuss the latest research and Council activities related to low-latency/high-throughput realtime workloads.

Innovation Roundup   Fast Data   Fast Compute
  "X-Rated"
    David Riddoch, Chief Architect, Solarflare Communications
  "Cutting-Edge Performance, Lowest latency, Extreme Reliability: Too good to be true?"
    Nick Rogers, CEO, BlackCore
  "Hardware Algos Made Easy: Deploy your trading strategies on FPGAs with the Enyx HLS Framework"
    Laurent de Barry, Founder & Managing Director, Enyx
  "Novasparks update, Tick To Trade: lets talk Numbers !"
    Yves Charles, VP Business Development, NovaSparks

 

Innovation Roundup   Fast Data
  "Synchronization (r)evolution in finance markets"
    Cesar Prados, CTO/Managing Director, Seven Solutions
  "Resilient Timing and Cybersecurity for Critical Financial Systems"
    Jean-Arnold Chenilleau, Senior Applications Engineer, Orolia Enterprises
  "Endace Visibility Stack"
    James Barrett, Senior Director EMEA Sales, Endace

 

A faster and fairer financial cloud   Fast Data
 

As financial services firms rapidly increase their use of public clouds for large batch computation, another swathe of use cases—realtime trading—lies in wait. If major markets and their participants could start trading in general purpose public clouds, the promise is both lower cost colocation and the ability to integrate the fast loop of live trading with cloud-scale analysis for faster adaptation of algorithms. But these realtime workflows require low latency, high throughput, and determinism. Some ingredients are emerging in the cloud, such as bare metal instances and realtime market data feeds, but a missing link has been accurate timestamping. Even before regulations like MiFID 2 and the CAT NMS plan began dictating accuracy requirements, trading firms and venues have needed highly accurate timestamps to understand, prove, and improve what happens in their realtime systems. Dan will argue that machine learning-based synchronization algorithms can make such accuracy—even nanosecond accuracy—feasible at cloud-scale. He will also argue that in cases where fairness is a key requirement, such as exchanges or internal matching, the real answer is not deterministic latency but rather highly accurate timestamps that can give commodity, best-efforts networks the effective determinism of a high-performance network.
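
For context, software timestamp correction typically starts from two-way timestamp exchanges of the kind sketched below; a simple least-squares fit of offset and drift over many noisy exchanges is shown purely as a generic illustration of the raw material such methods refine, not as the machine learning approach Dan will present.

```python
# Generic illustration of estimating a client clock's offset and drift from
# two-way timestamp exchanges (NTP/PTP style). This is NOT the ML-based
# algorithm from the talk; it only shows the kind of data such methods refine.
import numpy as np

def client_offset(t1, t2, t3, t4):
    """Single-exchange estimate of (client clock - server clock).
    t1 = client send, t2 = server receive, t3 = server send, t4 = client
    receive; assumes the forward and return path delays are symmetric."""
    return ((t1 - t2) + (t4 - t3)) / 2.0

# Synthetic exchanges: the client clock runs 5 us fast and drifts 2 ppm, and
# independent exponential queueing delays stand in for a best-efforts network.
rng = np.random.default_rng(1)
true_offset, drift = 5e-6, 2e-6
samples = []
for k in range(200):
    s = k * 0.1                              # server (reference) time, seconds
    theta = true_offset + drift * s          # client clock error at time s
    d1 = 50e-6 + rng.exponential(20e-6)      # request one-way delay
    d2 = 50e-6 + rng.exponential(20e-6)      # response one-way delay
    t1 = s + theta                           # client send    (client clock)
    t2 = s + d1                              # server receive (server clock)
    t3 = s + d1                              # server send    (server clock)
    t4 = s + d1 + d2 + theta                 # client receive (client clock)
    samples.append((s, client_offset(t1, t2, t3, t4)))

# Fitting offset and drift by least squares over many noisy exchanges already
# beats trusting any single measurement; smarter estimators go further.
times, offsets = np.array(samples).T
drift_est, offset_est = np.polyfit(times, offsets, 1)
print(f"estimated offset {offset_est * 1e6:.2f} us, drift {drift_est * 1e6:.2f} ppm")
```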

Innovation Roundup   Big Data   Big Compute
  "News and updates from Exablaze "
    Dr Matthew Grosvenor, VP Technology, Exablaze
  "Challenges for packet analytics in high speed networks and public cloud infrastructure. "
    Matt Davey, Director of Product Management, Corvil
  "Arista innovations for finance "
    Dr David Snowdon, Director of Engineering, Arista
  "’cburst’ - practical continuous real time analysis of data-streams utilization at millisecond resolution"
    Nadeem Zahid, VP Product Management, cPacket

 

Panel: State of the art in high-speed capture   Big Data   Fast Data
 

Wire capture is a crucial element of trading systems today, with the resulting data serving as a source not only for operational monitoring but also for the development and backtesting of trading strategies. And like other parts of a trading system, capture architectures can't stand still. In an interactive session with the audience, our panelists will debate key questions such as: How are 40G and higher bandwidths affecting requirements? How is the end-to-end workflow evolving, from timestamping and aggregation to capture, replication/distribution, and analysis? How are people balancing speed of capture and speed of retrieval? What impact will recent advances in storage and memory have? What’s changing in the monitoring industry and in the build-vs-buy equation?