STAC Summit, 6 Jun 2019, NYC

STAC Summits


STAC Summits bring together industry leaders in ML/DL engineering, data engineering,
system architecture, application development, infrastructure engineering, and
operational intelligence to discuss important technical challenges in the finance industry.
Come to hear leading ideas and exchange views with your peers.

WHEN
Thursday, 6 June 2019
STAC Exchange (exhibits) opens at 8:30am
Meeting starts at 9:00am
Networking Lunch at ~12:00pm
Conference concludes ~5:00pm
Reception immediately following.

WHERE
New York Marriott Marquis, 1535 Broadway, New York
Astor Ballroom


Partial Agenda (check back for updates!)

 

Session tracks: Big Compute, Fast Compute, Big Data, Fast Data


STAC Update: AI Workloads (Big Data, Big Compute)

Michel will discuss progress toward benchmark standards for ML/DL techniques and technologies based on problems that finance firms in the STAC Benchmark Council have said they care about.

Panel: AI and the engineer (Big Data, Big Compute)
  • Speakers to be announced

Financial firms are increasingly using machine learning or deep learning (broadly, “AI”) to improve outcomes or automate functions. Like many other key components of a business plan, AI is a team sport. A brilliant researcher can't be productive without a lot of help with data, technology, and process management. Fortunately, many of these skills already exist in the finance industry, whether that means managing large amounts of data, designing systems for high performance, or managing uncertain processes in an agile way. What are the key engineering and development skills needed on effective data science teams? What technologies are most crucial to understand today and in the near future? How much data science knowledge does a technologist need in order to be a highly effective data engineer or ML engineer? Which parts of the process are most likely to be automated, purchased, or outsourced in the near future, and which parts will continue to rely on in-house (human) technologists? Our panel will discuss.

Innovation Roundup (Big Data, Big Compute)
  "Bigstream: The Autonomous Data Platform for Accelerated AI"
    Roop Ganguly, Chief Solutions Architect, Bigstream
  "How Your Storage Architecture Mismatch is Hurting Machine Learning Performance"
    Björn Kolbeck, Co-founder/CEO, Quobyte
  "Beyond Low Latency Market Data – Real-Time Signals-as-a-Service"
    David Taylor, Chief Technology Officer, Exegy
  "AI is hot. Here’s how to cool it."
    Dave Weber, Wall Street CTO & Director, Lenovo

 

Break

 

Productionizing ML & DL models at scale (Big Data)

Recent years have seen tremendous advances in tools and processes to facilitate the training of machine learning and deep learning models. However, putting these models into production is not as well supported. The CI/CD ecosystem that has evolved for deploying and monitoring traditional web applications is a rich starting point but doesn’t support some key requirements of ML/DL models. At Paperspace, Raj works with clients to overcome these challenges and build out effective CI/CD pipelines for AI. In this talk, he’ll share some emerging patterns, state-of-the-art methods, and best practices that leading companies are using to productionize their models.

Innovation Roundup (Big Data, Big Compute)
  "Taming Monte Carlo with Intel PSG libraries"
   Mutema Pittman, Director of Enterprise Business Division, Programmable Solutions Group, Intel
  "18X Faster -FRTB- 60B Global Trades"
    William Hill, Lead Data Scientist, for Axellio
  "Effective, lossless, and fast compression for timeseries"
    Feargal O'Sullivan, CEO, for QuasarDB
  "Levyx: System Software for Computational Storage in Fintech"
    Matt Meinel, SVP Solutions Architecture, Levyx

 

STAC Update: Big Workloads (Big Data, Big Compute)

A lot of innovative solutions for big data and big compute workloads like enterprise tick analytics, strategy backtesting, and market risk are being subjected to STAC Benchmarks. If he can catch his breath, Michel will take us through the latest research.

Innovation Roundup (Big Data, Big Compute)
  "A Storage Architecture to Accelerate kdb+ "
    Andy Flesch, Regional Sales Manager, Weka.IO
  "Billions of files, PB of data, what do I do now?"
    Christian Smith, Vice President, Product, Igneous
  "Economics of Performance. An Oxymoron or Reality?"
    Brett Miller, Field Chief Technology Officer, Violin
  "Memorize the Future"
    Charles Fan, Co-founder and CEO, MemVerge
  Title to be announced
    Speaker to be announced, Minio

 

Networking Luncheon

 

Innovation Roundup (Big Data, Big Compute)
  "How the quest for profits led to the quest for scale"
    Alex Usatin, Advisory Systems Engineer, Unstructured Data Solutions team, Dell EMC
  "Managing NVMe Storage at Rack-Scale"
    V.R. Satish, CTO, Pavilion Data Systems
  "What if you don’t have to move data?"
    Steve Wallo, CTO, Vcinity

 

Compute, meet Data: Accelerating I/O at the micro and macro level (Big Data, Big Compute)

Ever-increasing demands from the business, the growth of new AI workloads, and the increasing pull of public clouds call for faster and more flexible ways to get data where it’s needed in computations. Fortunately, the vendor community is competing and collaborating to solve these problems. Innovations include standards like NVMe and NVMe over Fabrics, persistent memory with interesting properties, new and enhanced distributed file systems, rapidly evolving cloud services, and products that claim to substantially mitigate problems we expect when compute and storage are separated by long distances. Sorting through this Cambrian explosion of offerings is a good problem to have, but a problem nonetheless. How should application developers think about memory and storage today? What’s possible with file, block, and object storage? When is public cloud a realistic option, and when not? What are the promise and perils of disaggregated storage? Our panel of experts will debate.

STAC Update: Fast Data (Fast Data)

Peter will discuss the latest research and Council activities related to low-latency/high-throughput real-time workloads.

Innovation Roundup (Fast Data)
  "Enyx nxLink Product Update & Announcements"
    Arnaud Derasse, Co-founder & CEO, Enyx
 "Advanced FPGA Design Debug with Questa “Visualizer” from Mentor"
    David Lidrbauch, Product Marketing Manager, Mentor
  "Novasparks update, Tick To Trade: lets talk Numbers !"
    Cliff Maddox, Director of Sales, NovaSparks

 

Break

 

Innovation Roundup (Fast Data)
  "Streaming Telemetry for your Low Latency Network"
    Jag Tamvada, Director, Product Management, Cisco
  "Leveraging Real Systems Monitoring."
    Theo Schlossnagle, CTO, Circonus
  "New Time Services Available from Orolia"
    Trevor Bertrand, Senior Inside Sales, Orolia
  "X-Rated"
    Davor Frank, Director, WW Sales Engineering, Solarflare

 

Understanding an ultra-fast market through ultra-accurate time sync (Fast Data, Fast Compute)

Deutsche Boerse has invested heavily to instrument their exchange in order to monitor compliance, fairness, and other critical factors at the micro level. But doing so while the tick-to-trade latency of their members rapidly contracts has required increasingly accurate time synchronization at large scale. Toward this end, Deutsche Boerse has gained valuable experience deploying White Rabbit (roughly PTP with synchronous Ethernet) for time synchronization in their co-location network capture infrastructure. They have also built a data service that makes White Rabbit synchronized timestamps available to exchange members, enabling them to connect to Deutsche Boerse's White Rabbit master. In this talk, Sebastian and Andreas will discuss lessons learned through these projects and potential plans for further enhancements to the exchange's time sync architecture, including the possibility of something radically different. They will also reveal what ultra-accurate timestamps can tell us about the current state of the latency race among trading firms. Come ready to ask questions and toss around ideas.

Panel: Time sync & capture in 2019 (Fast Data, Fast Compute)
  • Speakers to be announced

Maintaining operational intelligence in the high-frequency battlefield continues to get more challenging. Understanding events in the system with clarity means the ability to capture the events, understand exactly when they happened, and pull those events into meaningful analysis. Our panelists will discuss these challenges and opine on the best ways to solve them. What kind of accuracy do trading firms need in the near future, and what kind of accuracy is possible? Are these even separable questions, or do new capabilities create their own demand? What kind of network does high accuracy require? How are 40G and higher bandwidths affecting requirements for capture? How are advances in storage and memory affecting capture architectures? What’s changing and what’s not in the build-vs-buy equation?

~5:00pm Networking Reception