AI and Finance
Vision for Artificial Intelligence for Financial Services
Decision-making processes in financial firms are notoriously hard for human intelligence to grapple with because of the sheer volume of data involved. Traditionally, humans have taken a divide-and-conquer approach, creating silos along asset-class boundaries and across functions such as trading, risk management, model risk, legal, and client relations. This separation, however, impedes holistic views and firm-level optimisation, and it is resource-intensive. Artificial intelligence, with its superior ability to manage large volumes of data, can help break down these silos and elevate decision-making to levels of generality and optimisation that would otherwise be unattainable by humans.
Large Language Models (LLMs) are highly proficient at generating code from natural-language descriptions. Human interfaces will mostly be in natural languages such as plain English, which LLMs will translate into code. With LLMs as the programmers, risk analytics will be configured through modelling languages rather than APIs.
Stored data tokens will be semantically complete, enabling not only smart contracts but also smart models and smart analytics definitions. Data storage will evolve to combine features of traditional databases with those of Git-style source control, and data will be stored on blockchains. Analytics will be tokenised alongside other smart contracts.
For AI to become truly general, it has to provide a comprehensive, holistic view and optimise based on enterprise-level modelling.
Therefore, risk analytics will need to be generic, not specific to particular models, trades, or asset classes. Market data will be smart and distributed as estimated/calibrated model definitions that can be used across all use cases.
Risk analytics must also be scalable and executable on the full scope of legal entity-level portfolios. To achieve this, they need to be optimized for vector processors like GPUs and run on AI servers. Additionally, risk analytics must be programmable in a modeling language that spans the entire risk and pricing domain, packaged as a compiler-solver combination.
Risk models will be formulated in terms of generic Hidden Markov Models (HMMs) and correlation models to match the language of Machine Learning algorithms and to cross-reference forward-looking with backward-looking information. This will make risk analytics forward-looking and adaptable to changing market conditions.
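To make the HMM formulation concrete, here is a minimal sketch of scenario generation from a regime-switching model. The three regimes, their transition probabilities, and per-state volatilities are all illustrative assumptions, not calibrated values.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical 3-state regime model: calm, stressed, crisis.
states = ["calm", "stressed", "crisis"]
transition = np.array([          # transition[i, j] = P(next = j | current = i)
    [0.90, 0.08, 0.02],
    [0.20, 0.70, 0.10],
    [0.10, 0.30, 0.60],
])
daily_return_vol = np.array([0.005, 0.015, 0.040])  # per-state return volatility

def simulate_paths(n_paths, horizon, start_state=0):
    """Generate return scenarios by sampling hidden-state paths."""
    paths = np.zeros((n_paths, horizon))
    for p in range(n_paths):
        s = start_state
        for t in range(horizon):
            s = rng.choice(3, p=transition[s])       # sample next hidden state
            paths[p, t] = rng.normal(0.0, daily_return_vol[s])
    return paths

scenarios = simulate_paths(n_paths=1000, horizon=250)
print(scenarios.shape)  # (1000, 250)
```

In a production setting the explicit Python loops would be replaced by fully vectorised state sampling, but the model structure — hidden regimes driving observable returns — is the same one that ML estimation algorithms operate on.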
LLMs will serve as orchestrators
LLMs will iterate the cycle from data capture to modeling, scenario generation, strategy optimization, model risk assessment, backtesting, trade automation, and model revision. Humans will have a supervisory role to ensure alignment, and explanatory interfaces will be necessary to facilitate communication between humans and AIs.
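The iterated cycle described above can be sketched as a loop over its stages. Every function below is a toy stand-in named purely for illustration; none of them is a real API.

```python
import random

random.seed(0)

# Each step is a minimal stand-in for a real platform component.
def capture_data():            return [random.gauss(0, 1) for _ in range(100)]
def calibrate_model(data):     return {"mu": sum(data) / len(data)}
def generate_scenarios(m):     return [m["mu"] + random.gauss(0, 1) for _ in range(50)]
def optimise_strategy(scen):   return {"position": 1 if sum(scen) > 0 else -1}
def backtest(strategy, data):  return strategy["position"] * sum(data)

def orchestrate(cycles=3):
    """Iterate data capture -> modelling -> scenarios -> strategy -> backtest."""
    pnl_log = []
    for _ in range(cycles):
        data = capture_data()                      # data capture
        model = calibrate_model(data)              # modelling / calibration
        scenarios = generate_scenarios(model)      # scenario generation
        strategy = optimise_strategy(scenarios)    # strategy optimisation
        pnl = backtest(strategy, data)             # backtesting
        pnl_log.append(pnl)                        # feeds model revision
    return pnl_log

print(orchestrate())
```

In the vision above, an LLM drives this loop and a human supervisor gates the transition from backtest results to live trade automation.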
The use of AIs as orchestrators will enable a streamlined and efficient risk management process that leverages the power of machine learning and automation. The process will be adaptable to changing market conditions and will minimize errors that result from human intervention.
All Large Language Models share one main blind spot: solving mathematical problems.
Bill Gates: “Math is a very abstract type of reasoning. Right now, I’d say that’s the greatest weakness [of GPT4]. Weirdly, it can solve lots of math problems. There are some maths problems where, if you ask it to explain it in an abstract form, make essentially an equation or a program that matches the math problem, it does that perfectly and you could pass that off to a normal solver.”
[From: Bill Gates on AI and the rapidly evolving future of computing]
Dr Stephen Wolfram: “People say, what’s going to happen to all the programmers,” said Wolfram at an event for the launch of the Alpha GPT extension. “It’s like, what’s going to happen to everybody who does boilerplate […] documents of various kinds? That’s kind of going away. And similarly, people have rushed into […] going into computer science school and learning how to write Java code, Python code, whatever else it is. And it’s like, a lot of that is just going to go away.”
[From: CHATGPT + WOLFRAM - THE FUTURE OF AI! - YouTube]
Global Valuation Esther is the first and only solver for all mathematical problems in the risk and pricing domain.
The Esther compiler is based on domain-specific language extensions to leading programming languages that make it possible to express mathematical problems in risk and pricing while factoring out, in full generality, the complexities of orchestrating a high-performance execution strategy.
Adding Strategy Optimisation to LLMs
GPTs are remarkably capable across most AI tasks, with one notable exception: planning. To master planning, they must learn decision theory under uncertainty. This theory relies on constructing probabilistic models to represent the problem at hand, collecting data to calibrate model parameters, generating scenarios for the possible future evolutions of the observable system, and optimising actions against this scenario set. Strategies can then be refined by back-testing them over time.
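The four steps just listed — model, calibrate, generate scenarios, optimise — can be illustrated end to end on a deliberately simple problem: sizing a single position under a mean-variance objective. The distributional assumption, the synthetic data, and the risk-aversion value are all placeholders for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1: probabilistic model -- assume the position's daily return is normal.
# Step 2: calibrate parameters from (synthetic) observed data.
observed = rng.normal(0.001, 0.02, size=500)
mu, sigma = observed.mean(), observed.std()

# Step 3: generate scenarios for possible future evolutions.
scenarios = rng.normal(mu, sigma, size=10_000)

# Step 4: optimise the action (position size) against the scenario set,
# here by grid search over a mean-variance objective.
candidate_sizes = np.linspace(0.0, 2.0, 41)
risk_aversion = 5.0

def objective(size):
    pnl = size * scenarios
    return pnl.mean() - risk_aversion * pnl.var()

best = max(candidate_sizes, key=objective)
print(best)
```

Real problems in this domain replace the grid search with stochastic control solvers over high-dimensional portfolios, which is where the PhD-level tooling and compute costs mentioned below come in.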
Unfortunately, creating such models traditionally requires PhD-level mathematicians to deploy a range of applied math tools, and the hardware compute costs are also high. This has made mathematical finance, the foundation of pricing theory and risk management, the most advanced area of decision theory under uncertainty.
Global Valuation has developed a generic compiler-solver combination for a vast array of problems in this domain. The technology employs a novel mathematical framework that avoids the need for mathematical shortcuts, greatly simplifying the modelling language. In addition, Esther's vectorisation technique utilises operator algebras and matrix operations, which are optimised for AI servers, resulting in unparalleled levels of performance.
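The performance point about matrix operations can be seen in miniature: on a Markov model, a multi-step expectation can be computed as repeated matrix-vector products — the kind of dense linear algebra GPUs excel at — rather than by simulating individual paths. The transition matrix and payoff below are illustrative only.

```python
import numpy as np

# Illustrative 3-state transition matrix and per-state payoff.
P = np.array([
    [0.90, 0.08, 0.02],
    [0.20, 0.70, 0.10],
    [0.10, 0.30, 0.60],
])
payoff = np.array([1.0, 0.5, -2.0])

# E[payoff after 10 steps | start in state 0], computed by backward
# induction: each step is one matrix-vector product, a pure tensor op.
v = payoff
for _ in range(10):
    v = P @ v
print(v[0])
```

Monte Carlo simulation would need many thousands of sampled paths to approximate the same number; the matrix formulation computes it exactly and maps directly onto vector hardware.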
Combining GPTs with Esther can dramatically reduce hardware costs and simplify model building to the point that it can be entirely automated. This, together with the reduced cost of data access that GPTs enable, can trigger a quantum leap in the use cases of decision theory under uncertainty. The primary and immediate application domain is risk and pricing for banks, clearing houses, investment buy sides, and regulators. However, once costs of access are drastically reduced, the range of applications will be vast and far broader than the Finance domain.
Integrating Esther with Large Language Models
The integration of Large Language Model solutions with Esther promises to raise the quality level of strategy design and implementation to new levels.
Esther adds mathematical intelligence to LLMs: a cross-industry, cross-asset capability that brings stochastic control theory within the reach of LLMs. This capability applies to Finance problems ranging from valuation to strategy optimisation, resource allocation, risk management, and model risk.
LLMs naturally interface with the Esther Compiler, since the Esther language extensions are isomorphic to natural languages apart from boilerplate code decorations. LLMs excel at defining mathematical problems but require a generic mathematical solver such as the Esther Solver to perform the calculations.
The Esther Solver was designed from the ground up to execute on the same GPU platforms on which LLMs run.
Esther is based on models in the same HMM class that Machine Learning algorithms are designed around. Esther calibrates HMM models on forward-looking information such as asset valuations, while ML algorithms estimate HMM models from historical time series. The two approaches are complementary and mutually reinforce and correct one another.
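The two calibration routes can be contrasted on a toy two-state model. The historical route counts observed transitions (the maximum-likelihood estimate for an observed Markov chain); the forward-looking route chooses a transition probability so the model reproduces a quoted value. The regime series, payoffs, and market quote are all synthetic assumptions for this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

# Backward-looking route: estimate a 2-state transition matrix from a
# (synthetic) historical regime series by counting observed transitions.
hist_states = rng.choice(2, size=1000, p=[0.8, 0.2])
counts = np.zeros((2, 2))
for a, b in zip(hist_states[:-1], hist_states[1:]):
    counts[a, b] += 1
P_hist = counts / counts.sum(axis=1, keepdims=True)

# Forward-looking route (illustrative): choose the stress probability p so
# the model matches an assumed market-implied one-step expected payoff.
payoff = np.array([1.0, -1.0])        # per-state payoff
market_implied_value = 0.4            # assumed quote, starting in state 0
# Solve (1 - p) * 1 + p * (-1) = 0.4  =>  p = 0.3
p = (payoff[0] - market_implied_value) / (payoff[0] - payoff[1])
print(P_hist.round(2), p)
```

Comparing `P_hist` with the implied `p` is one simple way the two information sources can cross-check each other, as the text describes.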
Both Esther and LLM models are designed to scale, aggregating large amounts of information and projecting scenarios at the legal-entity level of aggregation. This promises to remove inefficiencies caused by management silos in organisations.
Financial Services Monetises the Investment in AI
All financial services firms will need an intelligent AI platform. The platform will provide three main capabilities:
1. Natural language intelligence that coordinates all activities with users and the rest of the platform
2. Data preparation and model calibration supported by machine learning
3. Mathematical problem solving capable of addressing decision-making under uncertainty
The platform needs to marry forward-looking and backward-looking analytics, coordinated by an LLM that provides a natural language interface to users.
The AI layer offers automation in the preparation of incoming data, such as news, corporate actions, contracts, and market data. This will accelerate portfolio migration into smart contracts on ledgers. LLMs can be complemented with other tools proficient at processing complex unstructured data.
For Pricing and Risk analytics, the AI layer allows the user to freely change model assumptions, quickly adapt to a new portfolio dataset, experiment with new definitions of analytics, and produce 3D graphics for risk drill-downs via voice or chat interaction.
Please note: We can work with alternative Large Language Model (LLM) platforms, such as the rapidly evolving open source solutions. For data privacy and security reasons, the LLM requires a private deployment.