Software Architecture
The Esther software architecture is elegant in its simplicity. Esther has three main parts:
- A high-productivity development environment for models, based on its domain-specific language, Esther DSL
- A compiler that generates high-performance code for hardware-accelerated compute platforms, such as GPUs
- A universal solver that computes all models presented by the compiler
These are made available in two Docker containers: the Esther Compiler and the Esther Solver.

Esther Solver is stateless in nature. Security is enhanced by not needing any local end-user data storage, and by the container being transient for the duration of the model run. All communications are over TLS, and even the payloads can be encrypted or obfuscated if necessary.
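To picture the stateless interaction with the Solver, the sketch below submits a compiled model over TLS and reads the result back. The endpoint, port, token handling and payload format are assumptions for illustration only, not the actual Solver API.

```python
import requests

# Hypothetical endpoint of a transient Esther Solver container (illustrative only).
SOLVER_URL = "https://esther-solver.internal.example.com:8443/v1/solve"

def run_model(compiled_model: bytes, api_token: str) -> dict:
    """Send a compiled model to a stateless Solver container and return the results.

    Nothing is persisted by the Solver; the payload exists only for the
    duration of the model run.
    """
    response = requests.post(
        SOLVER_URL,
        data=compiled_model,
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/octet-stream",
        },
        timeout=600,   # allow for long-running model runs
        verify=True,   # enforce TLS certificate validation
    )
    response.raise_for_status()
    return response.json()
```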
Integration
Esther can be integrated flexibly in different application and data contexts. The Esther Solver always remains the same software, with as many instances running as necessary. While the Esther Compiler and the Solver are easy to run as containers, other deployment methods are possible for embedding models in existing applications. The following examples illustrate the deployment options:

Deployment Options
Two Docker containers are required to run Esther. They can be flexibly deployed into a public cloud or a private on-premises cloud infrastructure. As a further option for quants and developers, the containers can be deployed to a powerful laptop or desktop.
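A minimal sketch of such a container-based deployment, using the Docker SDK for Python, is shown below. The image names, ports and environment variable are assumptions for illustration; the GPU reservation uses the SDK's standard device request mechanism.

```python
import docker

client = docker.from_env()

# Solver container: GPU-enabled, stateless, no volumes mounted.
solver = client.containers.run(
    "esther/solver:latest",                     # assumed image name
    detach=True,
    ports={"8443/tcp": 8443},
    device_requests=[
        docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
    ],
)

# Compiler container: holds the in-memory portfolio and talks to the Solver over TLS.
compiler = client.containers.run(
    "esther/compiler:latest",                   # assumed image name
    detach=True,
    ports={"9443/tcp": 9443},
    environment={"ESTHER_SOLVER_URL": "https://localhost:8443"},  # assumed variable
)
```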

Trial SaaS
We offer a full SaaS option for short, focused trials of Esther where the customer is happy to use obfuscated data.
Pilot with Private Data
A larger pilot may require private data to be used. We can facilitate this by offering only the Esther Solver as SaaS. The Compiler container can be deployed to the customer's private cloud account.


Full Private Deployment
Once a customer is fully licensed, Esther can be deployed into their private environments.
Scaling Illustrations
Esther was designed for large-scale portfolio risk analytics. It can handle entity-level XVA calculations for very large portfolios, as Esther utilises compute infrastructure very efficiently. A single model can use one or many GPU-enabled Solver containers simultaneously.

Data/Model Pre-calibration
Esther has a job queue management capability that manages job processing. It handles any failed jobs and resubmits them to available nodes.
The main requirement from the container service is an inventory of available Esther Solver containers.
The minimum hardware requirement is approximately 10 GB of GPU memory per job. Newer GPUs with larger memory will run jobs faster than older ones.
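A minimal sketch of this dispatch-and-resubmit behaviour, assuming a simple inventory of Solver nodes, is given below. The class and function names are hypothetical and only illustrate the logic described above, not the Esther API.

```python
import queue
import time
from dataclasses import dataclass

MIN_GPU_MEMORY_GB = 10  # approximate minimum GPU memory per job

@dataclass
class SolverNode:
    url: str
    gpu_memory_gb: int
    busy: bool = False

@dataclass
class Job:
    model_id: str
    attempts: int = 0

def run_on_node(node: SolverNode, job: Job) -> None:
    """Placeholder for submitting the job to a Solver container over TLS."""
    ...

def dispatch(jobs: "queue.Queue[Job]", inventory: list[SolverNode], max_attempts: int = 3) -> None:
    """Drain the job queue against the inventory of available Solver containers,
    resubmitting failed jobs to the next available node."""
    while not jobs.empty():
        job = jobs.get()
        node = next(
            (n for n in inventory if not n.busy and n.gpu_memory_gb >= MIN_GPU_MEMORY_GB),
            None,
        )
        if node is None:
            jobs.put(job)            # no free node with enough GPU memory; retry later
            time.sleep(1)
            continue
        try:
            node.busy = True
            run_on_node(node, job)
        except RuntimeError:
            job.attempts += 1
            if job.attempts < max_attempts:
                jobs.put(job)        # resubmit the failed job
        finally:
            node.busy = False
```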

Intra-day Pricing
Esther supports incremental derivatives pricing with an in-memory portfolio. The Compiler container retains the full portfolio in memory. When a new pricing request comes in, the Compiler sends a pricing job to a GPU (or set of GPUs) that holds the relevant portfolio scenarios in memory to execute the incremental calculations.
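The routing step can be pictured roughly as below. The partition names, URLs and mapping structure are purely illustrative assumptions about how the Compiler could track which Solver instance holds which scenarios in memory.

```python
from dataclasses import dataclass

@dataclass
class PricingRequest:
    portfolio_id: str
    trade_ids: list[str]

# Illustrative mapping maintained by the Compiler: portfolio partition -> Solver instance
SCENARIO_LOCATIONS = {
    "rates-book": "https://solver-a.internal:8443",
    "fx-book": "https://solver-b.internal:8443",
}

def route_pricing_job(request: PricingRequest) -> str:
    """Return the Solver instance that already holds the relevant scenarios in memory."""
    try:
        return SCENARIO_LOCATIONS[request.portfolio_id]
    except KeyError:
        raise ValueError(f"No Solver instance holds scenarios for {request.portfolio_id}")
```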
Large Portfolio Risk Models

Very large portfolio risk model runs can use a similar architecture, parallelising and/or batching the processing across one or more available GPUs. Large-memory Solver containers with multiple GPUs can handle entity-level portfolios for Tier-1 banks.
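A hedged sketch of that batching pattern is shown below: positions are split into batches and dispatched in parallel, one batch per available GPU-enabled Solver container. Batch sizing, names and the submission call are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def split_into_batches(positions: list[str], n_batches: int) -> list[list[str]]:
    """Evenly split the portfolio positions into n_batches batches."""
    return [positions[i::n_batches] for i in range(n_batches)]

def run_batch(solver_url: str, batch: list[str]) -> dict:
    """Placeholder for submitting one batch to a Solver container over TLS."""
    return {"solver": solver_url, "positions": len(batch)}

def run_portfolio(positions: list[str], solver_urls: list[str]) -> list[dict]:
    """Run the full portfolio by dispatching one batch per available Solver/GPU."""
    batches = split_into_batches(positions, len(solver_urls))
    with ThreadPoolExecutor(max_workers=len(solver_urls)) as pool:
        futures = [
            pool.submit(run_batch, url, batch)
            for url, batch in zip(solver_urls, batches)
        ]
        return [f.result() for f in futures]
```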
Detailed scenario/position/risk factor analysis, e.g. for Reverse Testing, can be conducted following a model run while the scenarios are still held in memory.
Please note: While the above illustrations use AWS notation, Esther can be deployed to any public or private cloud that offers standard container services and GPU-enabled, large-memory server infrastructure.