Over the past decade we have implemented many specialized computations that process large amounts of real-time data, such as market-data feeds, sensor-data feeds, and reference-data feeds, to compute various kinds of derived data: cross- and forward calculations, blending algorithms, data filters, etc. Most such computations are conceptually straightforward. However, the input data usually changes quickly, and the computations have to be distributed across a cluster of computers in order to minimize delays in calculating derived data. The issues of how to parallelize the computation, distribute the data, and handle failures conspire to obscure the original simple computation with large amounts of complex code to deal with these issues.
As a solution to this complexity, we designed a new abstraction that lets us express the simple computations we were trying to perform while hiding the messy details of parallelization, fault tolerance, data distribution, and load balancing in a library. The abstraction consists of publisher and worker tasks and a definition of their dependencies. Publisher tasks are typically adapters to external data sources, whereas worker tasks depend on data from publisher tasks or on the results of other worker tasks. Dependencies are initialized by a worker task when a stream is subscribed to, and they can differ between streams. A hierarchical model of worker nodes allows us to parallelize large computations easily and to use re-subscription as the primary mechanism for fault tolerance.
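To make the abstraction concrete, the following is a minimal sketch of how publisher and worker tasks might be wired together. The class and method names (Publisher, Worker, subscribe, and so on) are our own illustrative assumptions, not the actual Cortex API; the sketch shows only the subscription-driven dependency wiring described above, on a single machine.

```python
# Hypothetical sketch of the publisher/worker abstraction; names are
# illustrative assumptions, not the Cortex library's interface.
from typing import Callable, Dict, List


class Task:
    """Base node in the dependency graph; emits values to subscribers."""

    def __init__(self) -> None:
        self._subscribers: List[Callable[[object], None]] = []

    def subscribe(self, callback: Callable[[object], None]) -> None:
        self._subscribers.append(callback)

    def publish(self, value: object) -> None:
        for callback in self._subscribers:
            callback(value)


class Publisher(Task):
    """Adapter to an external data source, e.g. a market-data feed."""

    def on_external_tick(self, value: object) -> None:
        # Called by the feed handler; fan the update out to dependents.
        self.publish(value)


class Worker(Task):
    """Derived computation; its dependencies are wired up lazily,
    when the worker's stream is first subscribed to."""

    def __init__(self, compute: Callable[[Dict[str, object]], object],
                 dependencies: Dict[str, Task]) -> None:
        super().__init__()
        self._compute = compute
        self._dependencies = dependencies
        self._latest: Dict[str, object] = {}
        self._wired = False

    def subscribe(self, callback: Callable[[object], None]) -> None:
        # Initialize dependencies on first subscription, per stream.
        if not self._wired:
            for name, task in self._dependencies.items():
                task.subscribe(lambda v, n=name: self._on_input(n, v))
            self._wired = True
        super().subscribe(callback)

    def _on_input(self, name: str, value: object) -> None:
        # Recompute once every input has produced at least one value.
        self._latest[name] = value
        if len(self._latest) == len(self._dependencies):
            self.publish(self._compute(self._latest))


# Usage: a worker blending two feeds into a mid price.
bid_feed, ask_feed = Publisher(), Publisher()
mid = Worker(lambda d: (d["bid"] + d["ask"]) / 2,
             {"bid": bid_feed, "ask": ask_feed})
mid.subscribe(lambda v: print("mid:", v))
bid_feed.on_external_tick(99.0)
ask_feed.on_external_tick(101.0)   # prints: mid: 100.0
```

Under this reading, the re-subscription recovery path falls out naturally: if a task fails, its dependents simply subscribe to a fresh replica and the dependency wiring is rebuilt on the next subscription.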
The main feature of Cortex is a simple and powerful interface that enables automatic parallelization and distribution of large-scale computations, combined with an implementation of this interface that achieves high performance on a cluster of computers.