Storage & memory revolution

For most of the history of computing, software and hardware architectures have been built on the assumption that non-volatile storage (cards, tape, disk, SSD) requires much more time for reading and writing data than volatile storage (RAM). While that is still true, the gap has narrowed to a point that encourages new modes of thinking, both by technologists and by users.

STAC's testing of read-intensive analytics on state-of-the-art systems has shown a nearly 200x improvement in speed over the last seven years. That means that analytics that once took several hours can now be executed in a minute or two, and those that used to take minutes can now effectively be performed in real time. And the pace of improvement has shown no signs of slowing.

The ability to explore more data interactively, with more sophisticated questions, has significant implications for the capital markets, including paving the way for automated analysis (see Artificial Intelligence).
 
The increases in speed stem primarily from three major improvements: 

1) Faster media, particularly NAND flash memory and emerging alternative forms of solid-state memory. Flash has much lower latency than spinning disk and is projected to continue dropping rapidly in price. And STAC tests of new media such as 3D XPoint show that even better performance is possible.

2) Faster interconnects. The commoditization of higher-bandwidth, lower-latency networking (largely, though not only, Ethernet) is providing massive pipes to shuttle information between fast media and the point of consumption.

3) Software that is able to take advantage of these hardware advances. This includes new products for data management, enhancements to traditional products, and helper products that enable existing software to exploit new hardware more fully, as the sketch below illustrates.
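
To make the software point concrete, here is a minimal Python/NumPy sketch under assumed conditions: the file name ("prices.f64"), its flat float64 layout, and the window boundaries are hypothetical, and this is not STAC benchmark code. It contrasts a memory-mapped partial read, whose seek-heavy access pattern suits low-latency flash and NVMe, with the whole-file sequential scan that spinning disk traditionally rewarded.

```python
# Illustrative sketch only (hypothetical file layout, not STAC benchmark code).
import numpy as np

DTYPE = np.float64
ITEMSIZE = np.dtype(DTYPE).itemsize

def window_mean_mmap(path="prices.f64", start=0, stop=1_000_000):
    # Memory-map the file and touch only the slice we need. Pages are faulted
    # in on demand, so this partial, seek-heavy access pattern benefits
    # directly from the low latency of flash/NVMe media.
    prices = np.memmap(path, dtype=DTYPE, mode="r")
    return float(prices[start:stop].mean())

def window_mean_stream(path="prices.f64", start=0, stop=1_000_000,
                       chunk_rows=1 << 20):
    # The classic pattern tuned for spinning disk: scan the file sequentially
    # in large chunks and accumulate only the rows that fall in the window.
    total, count, offset = 0.0, 0, 0
    with open(path, "rb") as f:
        while True:
            buf = f.read(chunk_rows * ITEMSIZE)
            if not buf:
                break
            arr = np.frombuffer(buf, dtype=DTYPE)
            lo = max(start - offset, 0)
            hi = min(stop - offset, arr.size)
            if lo < hi:
                total += float(arr[lo:hi].sum())
                count += hi - lo
            offset += arr.size
    return total / count if count else float("nan")
```

The point is not the arithmetic but the access pattern: low-latency media make it practical for software to read only the data a query touches rather than streaming everything past the CPU.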

These innovations are coming together in a sort of "Cambrian explosion" of new architectures that combine media, form factors, interconnects, software, and entire solution designs in creative ways. Understanding how each of these new "organisms" behaves when confronted with key financial services workloads is an ongoing interest of the STAC Benchmark Council. STAC Summits have become the premier place to discuss these emerging architectures and their potential for the trading and investing industry. And that empirical question is well suited to benchmarks based on two of the most common big data workloads in the capital markets: enterprise tick analytics and strategy backtesting. The ongoing flow of benchmark results provides key insights into which of these new organisms will thrive.
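
To give a flavor of those workloads, here is a deliberately small, hypothetical sketch of a tick-analytics-style query (per-symbol volume-weighted average price over a session) in Python with pandas. The file, column names, and timestamps are assumptions, and it does not approach the scale or breadth of the actual benchmark suites; it simply shows the kind of read-intensive question whose interactive execution depends on the storage and memory stack described above.

```python
# Illustrative toy only: per-symbol VWAP over one trading session.
# The Parquet file and its columns ("timestamp", "symbol", "price", "size")
# are assumptions for the sake of the example.
import pandas as pd

def session_vwap(path="ticks.parquet",
                 start="2024-01-02 09:30", end="2024-01-02 16:00"):
    ticks = pd.read_parquet(path, columns=["timestamp", "symbol", "price", "size"])
    # Keep only ticks inside the session window.
    mask = (ticks["timestamp"] >= pd.Timestamp(start)) & \
           (ticks["timestamp"] <= pd.Timestamp(end))
    window = ticks[mask]
    # VWAP = sum(price * size) / sum(size), grouped by symbol.
    notional = (window["price"] * window["size"]).groupby(window["symbol"]).sum()
    volume = window.groupby("symbol")["size"].sum()
    return notional / volume  # pandas Series indexed by symbol
```

Run against years of history for thousands of symbols, even a query this simple becomes a serious test of media, interconnects, and data-management software.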