Software Architecture
The Esther software architecture is elegant in its simplicity. Esther has three main parts:
- A high-productivity development environment for models, based on its domain-specific language, Esther DSL
- A compiler that generates high-performance code for hardware-accelerated compute platforms, such as GPUs
- A universal solver that computes all models presented by the compiler
These are made available in two Docker containers: the Esther Compiler and the Esther Solver.
Esther Solver is stateless by design. Security is enhanced because no end-user data needs to be stored locally, and because the container is transient, existing only for the duration of the model run. All communications are over TLS, and payloads can additionally be encrypted or obfuscated if necessary.
Data/Messaging Model for Vectorisation
STEM is the Smart Trade Esther Message. We introduced it to represent all trades across all asset classes in a concise form suitable for analytics.
STEM is eminently simple: only 151 lines of protobuf code usable directly from virtually all language environments.
The FinOS/ISDA CDM is another smart trade standard, but it is tailored for contract life-cycle management, position keeping and payments; it is layered on top of FpML and thus runs to millions of lines of code. It covers virtually all trade types of the past, but there could in principle be future trade types that are not covered. STEM, by contrast, was developed for risk analytics. It is far leaner because it matches the mathematical structure of trades instead of listing their attributes, and it is far more general because that structure covers all trade types of the past as well as not-yet-invented trade types of the future.
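STEM's actual schema is not reproduced here. Purely as an illustration of the idea of describing a trade by its mathematical structure (observables and a payoff over them) rather than by asset-class-specific attributes, a minimal sketch with entirely hypothetical names might look like this:

```python
from dataclasses import dataclass

# Illustrative only: all class and field names below are hypothetical
# and are NOT STEM's actual schema.
@dataclass
class Observable:
    name: str              # a market observable, e.g. "EURUSD" or "SOFR"

@dataclass
class Leg:
    pay_dates: list        # payment dates for this leg
    payoff: str            # payoff expression over observables

@dataclass
class Trade:
    observables: list      # what the trade depends on
    legs: list             # cashflow legs defined by payoff expressions

# The same structure describes a vanilla FX option...
call = Trade(observables=[Observable("EURUSD")],
             legs=[Leg(pay_dates=["2026-06-30"], payoff="max(S - 1.10, 0)")])
# ...and, with different observables and payoffs, a swap or an exotic,
# without introducing a new message type per product.
```

The point of the sketch is that generality comes from the payoff-over-observables structure: new products need new payoff expressions, not new schema.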
What if you have an existing risk analytics platform with a complex user interface and a custom data representation already in place? Can you use Esther to vectorise the logic and have a quantum leap in performance and scalability on modern hardware platforms?
The answer is YES.
Once the portfolio is captured as a STEM message, Esther can then orchestrate the execution of any analytics.
This presentation explains STEM in detail, line by line.
Integration
Esther can be integrated flexibly into different application and data contexts. The Esther Solver always remains the same software, with as many instances running as necessary. While the Esther Compiler and Solver are easy to run as containers, other deployment methods are possible for embedding models in existing applications. The following examples illustrate the deployment options.
Deployment Options
Two Docker containers are required to run Esther. They can be deployed flexibly into a public cloud or a private on-premises cloud infrastructure. As a further option for quants and developers, the containers can be deployed to a powerful laptop or desktop.
Trial SaaS
We offer a full SaaS option for short, focused trials of Esther where the customer is happy to use obfuscated data.
Pilot with Private Data
A larger pilot may require private data to be used. We can facilitate this by offering only the Esther Solver as SaaS. The Compiler container can be deployed to the customer's private cloud account.
Full Private Deployment
Once a customer is fully licensed, Esther can be deployed into their private environments.
Scaling Illustrations
Esther was designed for large-scale portfolio risk analytics. It can handle entity-level XVA calculations for very large portfolios, as Esther utilises compute infrastructure very efficiently. A single model can use one or many GPU-enabled Solver containers simultaneously.
Data/Model Pre-calibration
Esther has a job queue management capability that manages job processing. It handles any failed jobs and resubmits them to available nodes.
The main requirement on the container service is an inventory of available Esther Solver containers.
The minimum hardware requirement is approximately 10 GB of GPU memory per job. Newer GPUs with larger memory will run the jobs faster than older ones.
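The queue behaviour described above (failed jobs are resubmitted to available nodes) can be sketched as a simple retry loop. This is a conceptual illustration of the pattern, with hypothetical names, not Esther's actual implementation:

```python
from collections import deque
from itertools import cycle

def run_jobs(jobs, nodes, run_on_node, max_attempts=3):
    """Dispatch jobs to solver nodes, resubmitting failures.

    `run_on_node(node, job)` is a hypothetical callable standing in for
    sending a job to a Solver container; it raises RuntimeError on failure.
    """
    queue = deque((job, 1) for job in jobs)   # (job, attempt number)
    node_pool = cycle(nodes)                  # simplistic round-robin selection
    results = {}
    while queue:
        job, attempt = queue.popleft()
        try:
            results[job] = run_on_node(next(node_pool), job)
        except RuntimeError:
            if attempt < max_attempts:
                queue.append((job, attempt + 1))  # resubmit the failed job
            else:
                results[job] = None               # give up after max_attempts
    return results
```

A real queue manager would also track node health and job timeouts; the sketch only shows the resubmission idea.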
Intra-day Pricing
Incremental derivatives pricing with an in-memory portfolio. The Compiler container retains the full portfolio in memory. When a new pricing request comes in, the Compiler will send a pricing job to a GPU (or set of GPUs) that holds the relevant portfolio scenarios in memory to execute the incremental calculations.
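The routing decision described above, sending an incremental request to the GPU that already holds the relevant scenarios in memory, can be sketched as a small lookup with a fallback. All names here are hypothetical illustrations, not Esther's API:

```python
# Conceptual sketch: route a pricing request to the solver whose GPU
# already caches the relevant portfolio scenarios. Names are hypothetical.
class ScenarioRouter:
    def __init__(self):
        self.scenario_owner = {}   # portfolio segment -> GPU id

    def assign(self, segment, gpu):
        # Record which GPU holds this segment's scenarios in memory.
        self.scenario_owner[segment] = gpu

    def route(self, segment, default_gpu="gpu-0"):
        # Prefer the GPU with warm scenarios; fall back to a default
        # solver if nothing is cached for this segment yet.
        return self.scenario_owner.get(segment, default_gpu)
```

The benefit is that the incremental job reuses scenarios already resident in GPU memory instead of regenerating them per request.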
Large Portfolio Risk Models
Very large portfolio risk model runs can use a similar architecture, parallelising and/or batching the processing across one or more available GPUs. Large-memory Solver containers with multiple GPUs can handle entity-level portfolios for Tier-1 banks.
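Batching a large portfolio across GPUs can be sketched as packing trades into chunks that respect a per-job GPU memory budget (roughly 10 GB, per the requirement stated earlier). The per-trade memory figures and function name are hypothetical, for illustration only:

```python
# Illustrative greedy batching: split a portfolio into chunks that fit a
# per-job GPU memory budget. Memory-per-trade values are hypothetical.
def make_batches(trades, mem_per_trade_gb, gpu_mem_gb=10.0):
    batches, current, used = [], [], 0.0
    for trade, mem in zip(trades, mem_per_trade_gb):
        if current and used + mem > gpu_mem_gb:
            batches.append(current)          # this chunk is full; start a new one
            current, used = [], 0.0
        current.append(trade)
        used += mem
    if current:
        batches.append(current)
    return batches
```

Each resulting batch can then be dispatched to a separate Solver container, in parallel where GPUs are available or sequentially otherwise.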
Detailed scenario/position/risk factor analysis, e.g. for Reverse Stress Testing, can be conducted following a model run while the scenarios are still held in memory.
Please note: While the above illustrations use AWS notation, Esther can be deployed to any public or private cloud that offers standard container services and GPU-enabled, large-memory server infrastructure.