The Esther software architecture is elegant in its simplicity. Esther has three main parts:
A high-productivity development environment for models, based on its domain-specific language, Esther DSL
A compiler that generates high-performance code for hardware-accelerated compute platforms, such as GPUs
A universal solver that computes all models presented by the compiler
These are made available in two Docker containers: the Esther Compiler and the Esther Solver.
Esther Solver is stateless by nature. Security is enhanced because no end-user data is stored locally and the container is transient, existing only for the duration of the model run. All communications are over TLS, and even the payloads can be encrypted or obfuscated if necessary.
Esther can be integrated flexibly in different application and data contexts. The Esther Solver always remains the same software, with as many instances running as necessary. While Esther Compiler and the Solver are easy to run as containers, other deployment methods are possible for embedding models in existing applications. Examples to illustrate deployment options:
Data/Messaging Model for Vectorisation
STEM is the Smart Trade Esther Message. We introduced it to represent all trades across all asset classes in a concise form suitable for analytics.
STEM is eminently simple: only 151 lines of protobuf code usable directly from virtually all language environments.
The FinOS/ISDA CDM is another smart trade standard, tailored for contract life-cycle management, position keeping and payments; it is overlaid on top of FpML and thus runs to millions of lines of code. It covers virtually all trade types of the past, but there could in principle be future trade types that are not covered. STEM, by contrast, was developed for risk analytics. It is far leaner because it matches the mathematical structure of trades instead of listing their attributes, and it is also more general: it covers all trade types of the past as well as not-yet-invented trade types of the future.
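To illustrate what "matching the mathematical structure of trades" can mean, here is a minimal Python sketch. All names in it (Observable, Payoff, forward, european_call) are invented for this example; they are not STEM's actual protobuf schema.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

# Illustrative only: a toy "structural" trade representation in the
# spirit described above (payoffs as mathematical expressions over
# observables), NOT STEM's actual protobuf schema.

@dataclass(frozen=True)
class Observable:
    """A market observable, e.g. a stock price or FX rate at a date."""
    name: str

@dataclass(frozen=True)
class Payoff:
    """A payoff: a function of the observed values."""
    observables: Tuple[Observable, ...]
    formula: Callable[..., float]

def forward(underlying: str, strike: float) -> Payoff:
    # A forward is simply S - K: structure, not a list of attributes.
    return Payoff((Observable(underlying),), lambda s: s - strike)

def european_call(underlying: str, strike: float) -> Payoff:
    # A call is max(S - K, 0); new trade types compose the same way.
    return Payoff((Observable(underlying),), lambda s: max(s - strike, 0.0))

call = european_call("ACME", 100.0)
print(call.formula(120.0))  # payoff if the underlying ends at 120.0
```

The point of the sketch is that a handful of structural primitives can express both existing and future trade types, which is why a structural representation stays small where an attribute catalogue keeps growing.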
What if you have an existing risk analytics platform with a complex user interface and a custom data representation already in place? Can you use Esther to vectorise the logic and have a quantum leap in performance and scalability on modern hardware platforms?
The answer is YES.
Once the portfolio is captured as a STEM message, Esther can then orchestrate the execution of any analytics.
This presentation explains STEM in detail, line by line.
Performance Metrics on Apple M2
XVA Model Run Metrics
We performance-test two multi-asset counterparty credit risk portfolios, each with about 720,000 trades in 6 currencies spanning interest rate, equity and FX derivatives. The two portfolios differ in that one has 500 counterparties and the other 3,000. These approximate a medium-sized and a large "core" XVA portfolio at a bank, where "core" implies that counterparty credit factors are best modeled dynamically.
We simulate over 90 monthly time intervals for a total of 60,000 (resp. 10,800) scenarios. Interest rate derivatives are modeled by two-factor interest rate models with local volatility, stochastic drift and jumps. Equity and FX derivatives are modeled by stochastic local volatility models with jumps (SLVJ). Counterparty credit is modeled with a credit-equity model of the SLVJ type. We calculate counterparty-specific and legal-entity-level XVA metrics, including CVA, FVA, KVA and PFE, and carry out a reverse stress analysis.
We run on a single Apple M2 Max processor with 96 GB of unified memory shared by its 12 CPU and 38 GPU cores. The first model runs in under 9 minutes and consumes 3 WHr; the second runs in 16'20'' and consumes 4.26 WHr. These figures are a fraction of traditional models' timings and energy consumption.
Here are the recorded performance demos:
And here is a sample model run report for this type of analysis, with results and additional details.
Wrong-way Risk Model Run Metrics
We consider a portfolio of single-stock and index equity derivatives, including forwards, futures, and European and American put and call options. The portfolio is automatically generated and contains about 500k trades, and we assume the Clearing House has 60 members.
Members are divided by trading strategy: random, only-long, only-short, insurance portfolios (with only short puts) and portfolios with a random allocation. We run a total of 10,000,000 correlated scenarios, assuming that both the underlying equity risk factors and the counterparty credit risk factors are modeled by defaultable stochastic-local volatility models with jumps. We postulate two kinds of jumps: small, frequent ones and large, rare ones. Both kinds are correlated with the stochastic driver for volatility, which jumps up whenever the underlying jumps.
We define the WWR add-on as the collateral amount such that the counterparty CVA calculated assuming correlation and including the WWR add-on equals the CVA calculated neglecting correlations. We also carry out a Reverse Stress Testing analysis identifying the riskiest scenarios. All metrics are calculated at the legal entity level. The calculation completes in 10'16'' and consumes 2.88 WHr of energy on the same Apple M2 Max processor as above.
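This definition of the add-on amounts to a one-dimensional root-finding problem in the collateral amount. The following Python sketch illustrates it by bisection; both CVA functions are made-up stand-ins for the example, not Esther's actual models.

```python
# Toy illustration of the WWR add-on defined above: find the collateral
# amount C such that the CVA computed with correlations and collateral C
# equals the CVA computed neglecting correlations. Both CVA functions
# below are made-up stand-ins, not Esther's actual models.

def cva_correlated(collateral: float) -> float:
    # Toy monotone-decreasing profile: more collateral, less exposure.
    return max(10.0 - 0.8 * collateral, 0.0)

def cva_uncorrelated() -> float:
    return 6.0  # toy value

def wwr_addon(lo: float = 0.0, hi: float = 100.0, tol: float = 1e-9) -> float:
    """Bisect on the collateral amount until the two CVA figures match."""
    target = cva_uncorrelated()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cva_correlated(mid) > target:
            lo = mid  # not enough collateral yet
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(wwr_addon(), 6))  # 5.0, since 10 - 0.8 * 5 = 6
```

Because CVA decreases monotonically in the posted collateral, the add-on is uniquely defined and any standard root finder will do; bisection is used here only for clarity.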
Here is the recorded performance demo:
And here is the model run report with the results and additional details.
Two Docker containers are required to run Esther. They can be flexibly deployed into a public cloud or a private on-premises cloud infrastructure. As a further option for quants and developers, the containers can be deployed to a powerful laptop or desktop.
We offer a full SaaS option for short, focused trials of Esther where the customer is happy to use obfuscated data.
Pilot with Private Data
A larger pilot may require private data to be used. We can facilitate this by offering only the Esther Solver as SaaS. The Compiler container can be deployed to the customer's private cloud account.
Full Private Deployment
Once a customer is fully licensed, Esther can be deployed into their private environments.
Esther was designed for large-scale portfolio risk analytics. It can handle entity-level XVA calculations for very large portfolios, as Esther utilises compute infrastructure very efficiently. A single model can use one or many GPU-enabled Solver containers simultaneously.
Esther has a job-queue management capability that manages job processing. It handles any failed jobs and resubmits them to available nodes.
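The resubmission behaviour can be sketched as a simple retry queue. This is an illustrative Python sketch under assumed names (Job, run_all, the flaky runner); it is not Esther's actual API.

```python
import queue

# Hypothetical sketch of the job-queue behaviour described above: failed
# jobs are resubmitted until a retry budget is exhausted. The names here
# (Job, run_all) are invented for illustration, not Esther's API.

class Job:
    def __init__(self, job_id: str, max_retries: int = 3):
        self.job_id = job_id
        self.attempts = 0
        self.max_retries = max_retries

def run_all(jobs, run_on_node):
    """Run every job; resubmit failures to the queue until retries run out."""
    pending = queue.SimpleQueue()
    for job in jobs:
        pending.put(job)
    done, failed = [], []
    while not pending.empty():
        job = pending.get()
        job.attempts += 1
        if run_on_node(job):          # True = the node completed the job
            done.append(job.job_id)
        elif job.attempts < job.max_retries:
            pending.put(job)          # resubmit to an available node
        else:
            failed.append(job.job_id)
    return done, failed

# Example: job "b" fails on its first attempt, then succeeds on resubmission.
attempts_seen = {"b": 0}
def flaky_runner(job):
    if job.job_id == "b":
        attempts_seen["b"] += 1
        return attempts_seen["b"] > 1
    return True

print(run_all([Job("a"), Job("b")], flaky_runner))  # (['a', 'b'], [])
```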
The main requirement on the Container Service is maintaining an inventory of available Esther Solver containers.
The minimum hardware requirement is approximately 10 GB of GPU memory per job. Newer GPUs with larger memories run jobs faster than older ones.
Incremental Derivatives Pricing with an In-memory Portfolio
The Compiler container retains the full portfolio in memory. When a new pricing request comes in, the Compiler sends a pricing job to a GPU (or set of GPUs) that holds the relevant portfolio scenarios in memory to execute the incremental calculations.
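A minimal sketch of that routing decision, assuming a hypothetical inventory that maps each solver node to the scenario sets it holds in GPU memory (the node names and scenario labels are invented for the example):

```python
# Hypothetical routing sketch for the incremental-pricing pattern above:
# send the job to a solver node whose GPU memory already holds the
# scenarios the request needs. All names below are illustrative.

def choose_node(nodes: dict, required: set):
    """Pick a node whose resident scenario sets cover the request."""
    for node, resident in nodes.items():
        if required <= resident:
            return node
    return None  # no warm node: scenarios must be generated first

inventory = {
    "solver-gpu-0": {"USD-rates", "EUR-rates"},
    "solver-gpu-1": {"USD-rates", "ACME-equity"},
}
print(choose_node(inventory, {"ACME-equity"}))  # solver-gpu-1
print(choose_node(inventory, {"GBP-rates"}))    # None
```

Routing to a warm node is what makes the calculation incremental: only the new trade's payoff is evaluated against scenarios that already reside in GPU memory.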
Large Portfolio Risk Models
Very large portfolio risk model runs can use a similar architecture, parallelising and/or batching the processing across one or more available GPUs. Large-memory Solver containers with multiple GPUs can handle entity-level portfolios for Tier-1 banks.
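The batching side of this pattern can be sketched as follows: split the run into fixed-size trade batches and assign them round-robin across the available Solver GPUs. The batch size and GPU names are made up for the example.

```python
# Illustrative batching sketch for the pattern described above. The
# batch size and GPU names are invented for the example.

def make_batches(n_trades: int, batch_size: int):
    """Split [0, n_trades) into contiguous half-open batches."""
    return [(i, min(i + batch_size, n_trades))
            for i in range(0, n_trades, batch_size)]

def assign_round_robin(batches, gpus):
    """Assign each batch to a GPU in round-robin order."""
    return {batch: gpus[k % len(gpus)] for k, batch in enumerate(batches)}

batches = make_batches(720_000, 200_000)
plan = assign_round_robin(batches, ["solver-gpu-0", "solver-gpu-1"])
print(batches)  # [(0, 200000), (200000, 400000), (400000, 600000), (600000, 720000)]
```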
Detailed scenario/position/risk factor analysis, e.g. for Reverse Stress Testing, can be conducted following a model run while the scenarios are still held in memory.
Please note: While the above illustrations use AWS notation, Esther can be deployed to any public or private cloud that offers standard container services and GPU-enabled, large-memory server infrastructure.