STAC Summit, 30 Nov 2022, London

STAC Summits

STAC Summits bring together CTOs and other industry leaders responsible for solution architecture, infrastructure engineering, application development, machine learning/deep learning engineering, data engineering, and operational intelligence to discuss important technical challenges in trading and investment.

Leonardo Royal Hotel
London City
8-14 Coopers Row, London (East) EC3N 2BQ


Click on the session titles to view the slides.

STAC big workloads update (Big Data, Fast Compute, Big Compute)

Peter will present the most recent benchmark results in enterprise tick analytics and market risk.

Innovation Roundup (Big Data, Big Compute)
  "Interoperability vs performance: bringing the best of both to kdb+"
    Connor Gervin, Partnerships Engineering Lead, KX
  "Accelerating science and analytics with Options Data Store, Data Platform and CDN"
    James Laming, VP, Global Head of Infrastructure, Options Technology


Beyond lift-and-shift: Optimizing cloud storage for I/O-bound workloads (Big Data, Big Compute)

According to many user firms within the STAC community, storage-centric workloads continue to be prime candidates for a cloud or hybrid model and are mostly undergoing a "lift and shift". But once the initial move is complete, how do technologists optimize for performance? What architectures make sense when configuring storage for time series databases and high-performance analytics? Mark will walk us through some real-life examples to answer these questions and more. Along the way, he'll provide information on profiling I/O to help you optimize your cloud-based and hybrid storage systems.
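Mark's profiling specifics aren't reproduced in this abstract, but as a minimal sketch of the kind of I/O profiling he refers to (the function name and block sizes below are illustrative, not from the talk), one can time sequential reads at several block sizes to estimate throughput:

```python
import os
import time

def profile_read_throughput(path, block_sizes=(4096, 65536, 1048576)):
    """Time sequential reads of `path` at several block sizes.

    Returns {block_size: MB/s}. Illustrative only: a real profile would
    also drop or bypass the page cache and measure random access.
    """
    results = {}
    size = os.path.getsize(path)
    for bs in block_sizes:
        start = time.perf_counter()
        with open(path, "rb") as f:
            while f.read(bs):  # sequential scan in bs-sized chunks
                pass
        elapsed = time.perf_counter() - start
        results[bs] = (size / (1024 * 1024)) / max(elapsed, 1e-9)
    return results
```

A real profile would also exercise random access, since time-series queries and high-performance analytics mix both patterns.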

STAC machine learning update (Machine Learning)

Peter will discuss the first vendor-optimized STAC-ML Markets (Inference) results, inference research on different CPU architectures, and updates to inference and training benchmark specifications.

Innovation Roundup (Big Compute, Machine Learning)
  "AI in Financial Services"
    Amr El-Ashmawi, VP Vertical Markets, Groq
  "Hopper: NVIDIA's New GPU Architecture & Why It Matters!"
    Tim Wood, Sr. Solution Architect FSI-EMEA, NVIDIA


Optimization strategies for the DL storage stack (Big Data, Machine Learning)

Many capital markets technologists have experience optimizing storage architectures for workloads like time-series databases and backtesting. But the training phase of Deep Learning is a different workload, with a complex mix of high-concurrency reads, checkpointing, and random large-block mmaps. The storage stack (client, network, file system, and storage hardware) must cater to these access patterns during initial model training as well as retraining. Otherwise, I/O bottlenecks can leave costly compute resources underutilized and delay model deployment. (Few storage architects want to be responsible for missed business opportunities.) In this talk, James will help you design more efficient AI solutions by exploring what happens during DL training from the storage system's point of view. Using lessons from real-world examples, he'll discuss the implications for the storage stack and walk through optimizations for the file system and the rest of the datapath.
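As a hedged illustration of one access pattern named above (random large-block reads through mmap; the function and parameters below are hypothetical, not from James's talk):

```python
import mmap
import os
import random

def random_block_reads(path, block_size=1 << 20, n_reads=8, seed=0):
    """Sketch of a DL-training-style access pattern: random large-block
    reads via mmap. Returns total bytes touched. Illustrative only --
    a real training loader overlaps this with compute and checkpoints.
    """
    rng = random.Random(seed)
    size = os.path.getsize(path)
    total = 0
    with open(path, "rb") as f, \
            mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        n_blocks = max(size // block_size, 1)
        for _ in range(n_reads):
            off = rng.randrange(n_blocks) * block_size
            chunk = mm[off:off + block_size]  # faults pages in from the page cache or storage
            total += len(chunk)
    return total
```

Seen from the storage system, this produces large, concurrent, non-sequential reads, which is precisely why a stack tuned for sequential backtesting scans can bottleneck.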

How to improve performance in open source AI (Machine Learning)

The primary focus of open source AI software has been functionality and usability. To keep up with users' initial needs, projects have prioritized breadth of coverage, including support for hardware, models, and data-pipeline integrations. But the more AI is used in production environments, the more users' needs are shifting to speed and scale. Declan thinks that performance is critical for the next generation of open source AI projects. After reviewing the current AI landscape and the challenges data scientists and AI engineers often face when they need to speed up and scale up their production pipelines, he will present key improvements the open source community has provided in response. Using currently available projects, he will show you how to exploit these performance innovations while maintaining functionality, through drop-in replacements and just a few lines of code.
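Declan's specific projects aren't named in this abstract, so as a generic, stdlib-only illustration of the "drop-in replacement, a few lines of code" pattern (the feature function is made up), here is memoizing an expensive transform without changing its call sites:

```python
import time
from functools import lru_cache

def slow_feature(x):
    """Stand-in for an expensive, repeatedly-called transform."""
    time.sleep(0.01)
    return x * x

# Drop-in replacement: same signature, results now cached.
fast_feature = lru_cache(maxsize=None)(slow_feature)
```

The accelerated open-source libraries Declan covers follow the same shape: keep the API, swap the implementation underneath.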

What CXL means for FSI (Fast Data, Big Data, Fast Compute, Big Compute)

At STAC two years ago, Dr. Matthew Grosvenor gave a primer on competing interconnect standards and explained why he viewed Compute Express Link (CXL) as the strongest contender from a technical standpoint. Having the support of all the major processor and systems vendors hasn’t hurt the standard, either. Today CXL-enabled solutions are starting to ship. What do you need to know? Graham will dive deep into CXL, explaining how it works and what advantages it provides. He'll also show how CXL interfaces an FPGA with a CPU, impacts the flow of data, and accelerates storage networking and memory-driven applications like realtime data processing and low-latency trading. Bring your questions and join Graham to understand this emerging technology.

STAC fast data update (Fast Data)

Peter will discuss the latest research and Council activities related to low-latency/high-throughput realtime workloads.

Approaching HF radio with your eyes wide open (Fast Data)

Microwave communication is highly dependent on short segments and line of sight. The more distance a link requires, the less applicable microwave is. Enter high-frequency (HF) radio, a longer wavelength signal that can work across thousands of kilometers between towers. HF opens up new possibilities for low latency over long hauls. However, HF is not without challenges. The day-night cycle, impacts of the seasons, solar cycles, and sunspots all affect it. How—and how often—do these physical phenomena impact HF service availability and performance? How many hours per day are HF links available? Where are we in the current solar cycle, and what does that mean for the coming years? What improvements are mitigating these issues? What is the impact on operations for HF adopters? Ehud will address these questions and yours in this interactive session.

Innovation Roundup (Fast Data)
  "BSO: Enter the fast lane of RF networks"
    Michael Bauer, Technical Pre-Sales Director, BSO
  "We are ready for the new SIP"
    Pierre Gardrat, CTO, NovaSparks


Low-latency market integration: Time to rethink buy-vs-build? (Big Data, Fast Data)

Every trading firm faces a choice in how to support latency-sensitive trading strategies with data, analytics, and execution in a new market: should we build and operate our own software/firmware, delegate it to a vendor, or do some of both? Once made, the decision often lasts many years. Do recent trends in the vendor landscape warrant a rethink on what to buy or whom to buy from? M&A has brought together companies in low-latency market data, order entry, historical tick data and analytics, hardware development, network connectivity, and operational support. The emerging product combinations claim to offer deeper and broader value than before. How close do the new offerings get to the ideal of plug-and-play market access? What sort of latency sensitivities can they serve? Are vendors’ engagement models changing along with the products? How are solutions satisfying rising demands such as cloud delivery and secure access, and how will they evolve as customer needs change? Join our panelists to add your questions to the mix. To kick off, some of the panelists provided a short presentation:

  "Unleashing the Value of Market Data"
    Rob Lane, Global Head of Business Execution, Refinitiv, an LSEG Business
  "The 5 biggest misconceptions regarding low latency trading technology"
    Jason White, Director of Market Data Solutions, Exegy
  "Transformative Solutions for Capital Markets"
    Peter Lawrey, Founder & CEO, Chronicle Software

STAC FPGA SIG update (Fast Compute, Fast Data)

Peter will present updates from the FPGA special interest group, including the status of current projects and collaborations between financial firms and vendors.

How to shine a light on full FPGA and ASIC performance (Fast Compute, Fast Data)

A straightforward way to design low-latency hardware is by distributing the performance budget across the system's blocks. The design achieves its latency goals if the performance measured in each block meets its budget in all use cases. But this direct approach may not address all problems. What about synchronization between blocks, a saturated communication infrastructure, or the order of the algorithm? Performance issues in these areas can remain in the dark with a block-level approach. Michael will show how to illuminate each layer of the hardware design and find unanticipated bottlenecks. He'll walk through performance measurement methods you can use immediately at the block, subsystem, and system levels to better understand and improve your performance.
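The block-budget approach Michael starts from can be sketched in a few lines (block names and numbers below are invented for illustration). Note what the sketch deliberately omits: synchronization between blocks, saturation, and algorithmic order, which is exactly the blind spot the talk addresses.

```python
def check_latency_budget(block_budgets_ns, end_to_end_budget_ns):
    """Sum per-block worst-case latencies and compare to the system
    target. Models nothing about inter-block synchronization or
    back-pressure -- the issues a block-level view leaves in the dark.
    """
    total = sum(block_budgets_ns.values())
    return total, total <= end_to_end_budget_ns

# Hypothetical pipeline budgets in nanoseconds.
blocks = {"mac": 40, "parser": 120, "book_build": 250, "strategy": 180, "tx": 60}
total_ns, within = check_latency_budget(blocks, end_to_end_budget_ns=700)
```

Each block can meet its number while the system still misses its goal, which is why the talk moves measurement up to the subsystem and system levels.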

Innovation Roundup (Fast Data)
  "Accelerating Trading Strategies with AMD Next Gen Low Latency Trading Platforms"
    Alastair Richardson, Strategic Business Development Director, Data Center and Communications Group, AMD
  "Latency & Throughput - Your NIC Can’t Keep up With Modern Markets – Ours Can"
    Alex Stein, Global Head Business Development, Liquid-Markets-Solutions
  "Understanding the intricacies of Market Data Feeds using a combination of FPGA’s, Capture Tools and Management Visibility"
    Iain Kenney, Sr. Director & Head of Product Management, cPacket


Advancing open source in FPGA (Fast Data, Fast Compute)

FSI software engineers have long benefited from abundant open-source projects, but firmware engineers still have few good options. This was unsurprising when the developer community was small, but now that FPGA development is widespread in finance and other industries, the time is ripe for change. How can financial firms propel the evolution of open source in FPGA? What projects can benefit from collaboration without participants losing proprietary advantages? In what ways can firms pool resources, work with vendors, and leverage the global community to reduce business costs? What changes are needed to both open- and closed-source toolchains to accelerate collaborative projects? Join our panel of experts as they discuss these questions and yours. To kick off, one of the panelists provided a short presentation:

  "Low-latency layer 3 and FPGAs"
   Dr. David Snowdon, Director of Engineering, Arista Networks