
AI in Asset Management: A Lightbulb Moment

· 5 min read
Harry Moore
Principal, Man AHL
Martin Luk
Quant Researcher, Man AHL
Matthew Hertz
Head of Machine Learning Technology, Man Group

This blog post was generated by AI; the original article is published at https://www.man.com/maninstitute/AI-asset-management-lightbulb-moment


Welcome to our latest discussion on the transformative role of AI in asset management. In this article, we explore how technology is not just automating tasks but reshaping strategic investment decisions. 📈

Key Takeaways​

  • Electricity is one of the most transformative technologies in human history, reshaping communication, transportation, industry, and entertainment.
  • Artificial Intelligence (AI), especially generative AI, is having a similar transformative impact, creating new features that were previously unimaginable.
  • At Man AHL, generative AI is being used for data augmentation, feature engineering, data extraction, and portfolio construction.

The Rise of Generative AI​

Thomas Edison invented the electric light bulb in 1879, yet electric lighting took decades to reach mass adoption. AI has followed a similarly gradual path from breakthrough to acceptance and widespread application. Today, generative AI is rapidly changing industries of all kinds, including asset management.

Figure: AI classification hierarchy

Generative AI is a subset of machine learning, which itself is a subset of broader AI. Generative AI enables users to interact with models using natural language and generate new outputs, significantly driving the recent excitement around AI.

Applications of Generative AI at Man AHL​

At Man AHL, we find that while generative AI has not yet replaced researchers or portfolio managers, it has significantly boosted productivity, allowing quantitative analysts ('quants') to focus more on alpha generation. Here are four specific use cases:

1. Coding with Copilot​

Tools like GitHub Copilot can accelerate the development of working prototypes and initial research results by predicting code continuations. This reduces development time and facilitates knowledge sharing. For example, developers can ask the AI to explain various parts of code written by others.

We are developing chatbots capable of understanding our internal code. For instance, one chatbot can identify where to find metadata for a market code and retrieve timeseries prices, specifying the correct libraries and fields, saving time. This capability enhances our efficiency and leverages our proprietary knowledge.
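To make the idea concrete, here is a minimal sketch of how such a code-aware assistant might route a market-code query. The metadata registry, library names, and the ask_llm() stub are hypothetical illustrations, not Man AHL's actual internal tooling.

```python
# Minimal sketch of a code-aware helper bot. The market registry, library names
# and ask_llm() stub are hypothetical, not actual internal tooling.
from dataclasses import dataclass

@dataclass
class MarketMetadata:
    market_code: str
    library: str        # library holding the timeseries (hypothetical)
    price_field: str    # field name for prices (hypothetical)

# Toy stand-in for an internal metadata store.
MARKET_REGISTRY = {
    "CL": MarketMetadata("CL", "futures_lib", "settle_price"),
    "ES": MarketMetadata("ES", "futures_lib", "close_price"),
}

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM with access to internal docs and code."""
    raise NotImplementedError

def answer_market_query(market_code: str) -> str:
    meta = MARKET_REGISTRY.get(market_code)
    if meta is None:
        # Fall back to the LLM when the registry has no direct answer.
        return ask_llm(f"Where is metadata for market code '{market_code}' stored?")
    return (f"Market {meta.market_code}: load prices from library "
            f"'{meta.library}', field '{meta.price_field}'.")

print(answer_market_query("CL"))
```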

2. Extracting Information for Trading​

Man AHL started as a commodity trading advisor (CTA) trading futures contracts. As the business has grown and diversified, we now trade more novel and exotic instruments like catastrophe bonds. Each catastrophe bond has unique features that need to be clearly understood before investment.

We are testing a process in which ChatGPT performs the data extraction, placing the relevant information into a systematic template for human review. This frees up analysts' time to focus on new research.
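As an illustration, a sketch of this kind of extraction step is shown below. It assumes the openai Python client and a set of hypothetical template fields (issuer, peril, trigger type, and so on); the populated template would still go to a human analyst for review.

```python
# Hedged sketch: extract catastrophe-bond terms into a fixed template for review.
# Field names and model choice are illustrative; requires the `openai` package
# and an OPENAI_API_KEY in the environment.
import json
from openai import OpenAI

TEMPLATE_FIELDS = ["issuer", "peril", "trigger_type", "attachment_point", "coupon", "maturity"]

client = OpenAI()

def extract_bond_terms(offering_text: str) -> dict:
    prompt = (
        "Extract the following fields from this catastrophe bond document and "
        f"return JSON with exactly these keys: {TEMPLATE_FIELDS}.\n\n{offering_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # ask for machine-readable output
    )
    extracted = json.loads(response.choices[0].message.content)
    # Keep only the expected keys so the template stays consistent for human review.
    return {field: extracted.get(field) for field in TEMPLATE_FIELDS}
```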

3. Assisting with Investor Queries​

Our Client Relations team handles various questions from clients about Man AHL's systematic investment strategies. Many questions require extracting information from different investment materials like factsheets, presentations, due diligence questionnaires, and investment commentaries.

ChatGPT can automate several steps in this process. First, it can extract the required information from relevant documents. Second, it can draft a response ready for human analyst review. This efficiency frees up time for the team to focus on higher-value tasks.

Figure: client query example
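A minimal sketch of that two-step workflow, using toy document snippets, naive keyword retrieval, and a stubbed LLM call, might look as follows. A production system would use proper document search, and any drafted answer would always be reviewed by an analyst.

```python
# Hedged two-step sketch: (1) pull candidate passages from investor materials,
# (2) have an LLM draft a reply for human review. Snippets and the stub are illustrative.
DOCUMENTS = {
    "factsheet": "The programme targets an annualised volatility of around 10%.",
    "ddq": "Risk is managed via position limits and daily stress tests.",
    "commentary": "Trend-following strategies detracted in choppy markets this quarter.",
}

def retrieve_passages(question: str, top_k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; real systems would use embeddings."""
    q_words = set(question.lower().split())
    scored = sorted(
        DOCUMENTS.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def draft_with_llm(question: str, passages: list[str]) -> str:
    """Placeholder for an LLM call; a human analyst reviews whatever it returns."""
    context = " ".join(passages)
    return f"DRAFT (for review): Based on our materials ({context}) ... answer to: {question}"

question = "How is risk managed in the strategy?"
print(draft_with_llm(question, retrieve_passages(question)))
```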

4. Analyzing Macro Data​

In quantitative macro research, ChatGPT can serve as a hypothesis generator, suggesting whether a particular economic timeseries has a fundamentally justifiable relationship with a certain market. These hypotheses can then be tested using statistical back-testing methods.

While ChatGPT will not replace our macro research team in its current state, its understanding can rival that of a graduate researcher. The key difference is scale: a human researcher needs breaks, whereas ChatGPT can systematically query thousands of relationships and suggest candidate signals.
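A hedged sketch of this hypothesis-generator pattern is shown below: a stubbed propose_hypothesis() call stands in for the LLM, and the statistical test runs on synthetic data purely for illustration.

```python
# Hedged sketch: use an LLM as a hypothesis filter, then test survivors statistically.
# propose_hypothesis() is a stub; the data below is synthetic.
import numpy as np
from scipy import stats

def propose_hypothesis(series_name: str, market_name: str) -> bool:
    """Placeholder: ask an LLM whether a fundamental link is plausible."""
    return True  # e.g. "US housing starts" vs "lumber futures"

rng = np.random.default_rng(0)
macro_series = rng.normal(size=250)                         # e.g. monthly macro surprises
market_returns = 0.3 * macro_series + rng.normal(size=250)  # synthetic market returns

if propose_hypothesis("US housing starts", "lumber futures"):
    corr, p_value = stats.pearsonr(macro_series, market_returns)
    print(f"correlation={corr:.2f}, p-value={p_value:.3f}")
    # Only relationships that survive statistical back-testing become candidate signals.
```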

Lessons Learned​

  • Managing Hallucinations: ChatGPT's responses cannot be fully trusted. To mitigate hallucinations, we use tools that highlight where information occurs in the original text, aiding human checking.
  • Prompt Engineering: If ChatGPT cannot perform a task well, the cause is often a misspecified prompt. Perfecting prompts requires significant resources, trial and error, and specific techniques.
  • Breaking Down Tasks: ChatGPT cannot logically break down and execute complex problems in one go. Effective 'AI engineering' involves splitting projects into smaller tasks, each handled by a specialist instance of ChatGPT with a tailored prompt (see the sketch after this list).
  • Education for Wider Adoption: Understanding ChatGPT's capabilities and limitations is crucial. Skeptics need to see its strengths, while enthusiasts need to understand its failure modes.
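To illustrate the task-decomposition point, here is a minimal sketch in which each sub-task gets its own specialist prompt; call_llm() is a stub and the prompts are illustrative.

```python
# Hedged sketch of 'AI engineering' by decomposition: each sub-task gets its own
# specialist prompt instead of one monolithic request. call_llm() is a stub.
def call_llm(system_prompt: str, user_input: str) -> str:
    """Placeholder for a single ChatGPT call with a tailored system prompt."""
    return f"[{system_prompt[:30]}...] output for: {user_input[:40]}"

SPECIALISTS = {
    "extract": "You extract key facts from the document as bullet points.",
    "verify": "You check each extracted fact against the source text and flag unsupported ones.",
    "summarise": "You write a two-sentence summary from verified facts only.",
}

def run_pipeline(document: str) -> str:
    facts = call_llm(SPECIALISTS["extract"], document)
    verified = call_llm(SPECIALISTS["verify"], facts + "\n---\n" + document)
    return call_llm(SPECIALISTS["summarise"], verified)

print(run_pipeline("Example document text..."))
```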

References​

  1. Agrawal, A., Gans, J. and Goldfarb, A., ‘Power and Prediction: The Disruptive Economics of Artificial Intelligence’, 2022.
  2. Luk, M., ‘Generative AI: Overview, Economic Impact, and Applications in Asset Management’, 18 September 2023. Available at: SSRN or DOI
  3. Ledford, A., ‘An Introduction to Machine Learning’, 2019. Available here
  4. Korgaonkar, R., ‘Diary of a Quant: AI’, 2024. Available here
  5. Pensions and Investments, ‘Man Group CEO sees generative AI boosting efficiency, but not investment decisions’, 30 April 2024. Available here
  6. Korgaonkar, R., ‘Diary of a Quant: Journeying into Exotic Markets’, 2024. Available here
  7. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D. and Sutskever, I., ‘Language models are unsupervised multitask learners’, OpenAI blog, 1(8), p. 9, 2019.
  8. Eloundou, T., Manning, S., Mishkin, P. and Rock, D., ‘GPTs are GPTs: An early look at the labor market impact potential of large language models’, arXiv preprint arXiv:2303.10130, 2023.
  9. Bloomberg, Odd Lots podcast, ‘How Humans and Computers Learn From Each Other’, 2 May 2024. Available here

For a deeper exploration, visit the original article on Man Group's website: AI in Asset Management: A Lightbulb Moment.

Welcome to LLMQuant

· One min read
Haoxue Wang
Founder of LLMQuant | Mathematics@University of Cambridge | HSBC | Microsoft Research
Xinjie Shen
Co-Founder of LLMQuant
Kevin Feng
Co-Founder of LLMQuant

LLMQuant is a vibrant community focused on LLM (large language model) and quant research. We aim to apply AI to quantitative research through a practical collection of techniques and scenarios.

Our community logo is shown below.

Figure: LLMQuant community logo

Leveraging Large Language Models (LLMs) in Execution Quant

· 7 min read
Haoxue Wang
Founder of LLMQuant | Mathematics@University of Cambridge | HSBC | Microsoft Research

In the rapidly evolving world of quantitative finance, staying ahead of the curve requires embracing the latest technological advancements. One such groundbreaking innovation is the use of Large Language Models (LLMs) like GPT-4. Traditionally seen as tools for natural language processing, LLMs are now proving their worth in various quantitative finance applications, including Execution Quant. This blog explores how LLMs can be effectively utilized in this specialized field.

Understanding Execution Quant​

Execution Quant involves developing and implementing strategies to execute trades efficiently, minimizing costs and market impact. This role requires analyzing vast amounts of data, understanding market microstructure, and continuously optimizing trading algorithms. The key objectives include achieving the best possible execution prices, reducing slippage, and maintaining anonymity in the market.
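For a concrete example of one such objective, the snippet below computes the slippage of the average fill price against the arrival price, expressed in basis points; the numbers are illustrative.

```python
# Hedged example of one execution-quality metric: slippage of the average fill
# price versus the arrival price, in basis points. Numbers are illustrative.
def slippage_bps(arrival_price: float, fills: list[tuple[float, int]], side: str = "buy") -> float:
    """fills: list of (price, quantity); a positive result means cost versus arrival."""
    total_qty = sum(qty for _, qty in fills)
    avg_fill = sum(price * qty for price, qty in fills) / total_qty
    signed = 1 if side == "buy" else -1
    return signed * (avg_fill - arrival_price) / arrival_price * 1e4

# Buying 1,500 shares with an arrival price of 100.00:
print(slippage_bps(100.00, [(100.02, 500), (100.05, 1000)]))  # ~4.0 bps of slippage
```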

The Role of LLMs in Execution Quant​

LLMs, with their ability to process and generate human-like text, offer several advantages in Execution Quant. Here are some key areas where LLMs can be transformative:

  1. Market Sentiment Analysis:

    • Natural Language Processing: LLMs can process news articles, social media posts, and financial reports to gauge market sentiment. By understanding the prevailing market mood, execution quants can adjust their strategies to align with bullish or bearish trends.
    • Real-time Insights: LLMs can provide real-time updates on market sentiment, enabling traders to make informed decisions quickly. (A minimal sketch of this sentiment-to-execution workflow appears after this list.)
  2. Algorithmic Trading:

    • Strategy Development: LLMs can assist in developing new trading strategies by analyzing historical data and identifying patterns that humans might overlook. These models can suggest innovative approaches to execution based on data-driven insights.
    • Backtesting and Simulation: LLMs can simulate various market conditions and backtest trading strategies against historical data, ensuring that the strategies are robust and effective.
  3. Market Microstructure Analysis:

    • Order Book Analysis: LLMs can analyze order book data to identify liquidity trends and predict short-term price movements. This information is crucial for optimizing order placement and minimizing market impact.
    • Trade Execution Optimization: By understanding the intricacies of market microstructure, LLMs can suggest optimal execution paths that reduce slippage and trading costs.
  4. Risk Management:

    • Predictive Analytics: LLMs can forecast potential risks by analyzing market data and historical trends. This predictive capability helps execution quants in pre-empting adverse market conditions and adjusting their strategies accordingly.
    • Scenario Analysis: LLMs can generate various market scenarios and evaluate the potential impact on trading strategies, allowing quants to devise contingency plans.
  5. Automation and Efficiency:

    • Automated Reporting: LLMs can automate the generation of detailed trade execution reports, saving time and reducing the risk of human error. These reports can provide insights into execution performance and areas for improvement.
    • Enhanced Communication: LLMs can facilitate better communication within trading teams by summarizing complex data and generating concise, actionable insights.
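As a concrete illustration of the first point, the sketch below scores headlines with a stubbed LLM call and maps the average sentiment to a target participation rate; the mapping rule is purely illustrative, not a production execution policy.

```python
# Hedged sketch tying sentiment to execution: score headlines with an LLM (stubbed
# here) and map the average score to a participation rate for a schedule-based algo.
def score_sentiment(headline: str) -> float:
    """Placeholder for an LLM call returning sentiment in [-1, 1]."""
    return 0.4 if "beats" in headline.lower() else -0.2

HEADLINES = [
    "Company X beats earnings expectations",
    "Sector outlook uncertain amid rate worries",
]

avg_sentiment = sum(score_sentiment(h) for h in HEADLINES) / len(HEADLINES)

# Map sentiment to a participation rate between 5% and 20% of market volume
# (an illustrative rule only).
base_rate, max_rate = 0.05, 0.20
participation = base_rate + (max_rate - base_rate) * (avg_sentiment + 1) / 2
print(f"avg sentiment={avg_sentiment:+.2f}, target participation={participation:.1%}")
```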

Challenges and Considerations​

While the potential benefits of LLMs in Execution Quant are substantial, there are several challenges to address:

  • Data Quality and Integrity: LLMs rely heavily on the quality of the data they are trained on. Ensuring clean, accurate, and comprehensive data is crucial for reliable outputs.
  • Model Interpretability: The complex nature of LLMs can make them difficult to interpret. Execution quants need to balance model performance with transparency to ensure trust in the system.
  • Integration with Existing Systems: Seamlessly integrating LLMs with current trading infrastructure can be challenging. It requires careful planning and collaboration between quants, data scientists, and IT professionals.
  • Regulatory Compliance: Adhering to regulatory requirements is essential in finance. LLMs must be designed and deployed in a manner that complies with all relevant regulations.

Conclusion​

The integration of Large Language Models into Execution Quant represents a significant step forward in the quest for optimal trading execution. By harnessing the power of LLMs, execution quants can gain deeper insights into market dynamics, develop more effective trading strategies, and improve overall efficiency. While challenges exist, the potential rewards make it a worthwhile endeavor. As the technology continues to evolve, its impact on quantitative finance will undoubtedly grow, paving the way for more innovative and effective execution strategies.

Embracing LLMs is not just a technological advancement; it's a strategic move towards staying competitive in an increasingly complex and fast-paced financial landscape.

Generative AI for End-to-End Limit Order Book Modelling

The application of Large Language Models (LLMs) in financial markets, specifically in execution quant, is a rapidly evolving area. One significant aspect of execution quant involves Limit Order Book (LOB) modelling, where generative AI models are being utilized to create realistic and predictive market simulations. This blog explores how LLMs and other generative models can enhance execution strategies through sophisticated LOB modelling.

Introduction to Execution Quant and LOB​

Execution quant involves optimizing the execution of large orders to minimize market impact and trading costs while achieving desired execution outcomes. A central element of this process is understanding and predicting the behavior of the LOB, which records all buy and sell orders for a particular asset, ranked by price level.
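A minimal illustrative order book structure, aggregating resting size per price level, might look like the sketch below; real books also track order priority, timestamps, and message types.

```python
# Minimal illustrative limit order book: price levels with aggregated size,
# bids queried high-to-low and asks low-to-high.
from collections import defaultdict

class LimitOrderBook:
    def __init__(self):
        self.bids = defaultdict(int)  # price -> total resting size
        self.asks = defaultdict(int)

    def add(self, side: str, price: float, size: int) -> None:
        book = self.bids if side == "buy" else self.asks
        book[price] += size

    def best_bid(self) -> float:
        return max(self.bids)

    def best_ask(self) -> float:
        return min(self.asks)

lob = LimitOrderBook()
lob.add("buy", 99.98, 300)
lob.add("buy", 99.99, 100)
lob.add("sell", 100.01, 200)
print(lob.best_bid(), lob.best_ask())  # 99.99 100.01
```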

Generative AI in Financial Markets​

Generative AI, particularly autoregressive models and state-space models, has shown great promise in various domains, including finance. These models can simulate realistic market conditions and order flows, which are crucial for developing and testing execution strategies.

Autoregressive Models for LOB​

Autoregressive models predict the next state in a sequence based on previous states. In the context of LOB, these models generate sequences of order book messages, capturing the complex dependencies and interactions between different market participants.
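Formally, such models factorise the joint probability of a message sequence into a product of next-message (or next-token) conditionals:

```latex
p(m_1, m_2, \dots, m_T) = \prod_{t=1}^{T} p\!\left(m_t \mid m_1, \dots, m_{t-1}\right)
```

where each m_t is the next order book message (or message token) given the history of all previous ones.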

State-Space Models​

State-space models provide a mathematical framework for modeling time series data. They are particularly useful for capturing the dynamics of LOB, as they can handle long sequences and maintain computational efficiency. The S5 architecture, for instance, excels in learning long-range dependencies and is well-suited for LOB data.
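For intuition, the sketch below implements a generic discretised linear state-space recurrence. Architectures such as S5 parameterize this recurrence in a structured way and compute it with parallel scans, so this is an illustration of the building block rather than the S5 code itself.

```python
# Generic discretised linear state-space recurrence, the building block that
# architectures like S5 parameterize and parallelise; not the S5 code itself.
import numpy as np

def ssm_scan(A: np.ndarray, B: np.ndarray, C: np.ndarray, inputs: np.ndarray) -> np.ndarray:
    """x_{t+1} = A x_t + B u_t ;  y_t = C x_t  (sequential reference implementation)."""
    x = np.zeros(A.shape[0])
    outputs = []
    for u_t in inputs:                 # inputs: (T, input_dim)
        x = A @ x + B @ u_t
        outputs.append(C @ x)
    return np.stack(outputs)           # (T, output_dim)

rng = np.random.default_rng(1)
A = 0.9 * np.eye(4)                    # stable toy dynamics
B = rng.normal(size=(4, 2))
C = rng.normal(size=(3, 4))
y = ssm_scan(A, B, C, rng.normal(size=(16, 2)))
print(y.shape)  # (16, 3)
```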

Tokenization of LOB Messages​

A novel approach in LOB modelling tokenizes order book messages, much as LLMs tokenize natural language: elements of each order message (e.g., order type, price, size) are converted into tokens that the model can process. This allows the model to handle large numerical values while preserving the precision of order details.
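A hedged sketch of such a tokenizer is shown below; the vocabulary and field layout are illustrative rather than the scheme used in the referenced work.

```python
# Hedged sketch of message tokenization: split each order message into discrete
# tokens (event type, side, digit-level price, size). Vocabulary is illustrative.
def tokenize_message(event: str, side: str, price: float, size: int, tick: float = 0.01) -> list[str]:
    price_ticks = round(price / tick)                    # express price in integer ticks
    price_digits = [f"P{d}" for d in str(price_ticks)]   # digit-level price tokens
    return [f"EVENT_{event}", f"SIDE_{side}"] + price_digits + [f"SIZE_{size}"]

print(tokenize_message("LIMIT_ADD", "BUY", 100.05, 200))
# ['EVENT_LIMIT_ADD', 'SIDE_BUY', 'P1', 'P0', 'P0', 'P0', 'P5', 'SIZE_200']
```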

Advantages of Generative LOB Models​

  1. Realistic Market Simulations: By generating realistic sequences of order book events, these models provide a high-fidelity simulation of market conditions.
  2. Improved Forecasting: The ability to predict future market states based on current and historical data helps in developing more effective execution strategies.
  3. Data Augmentation: Generative models can create synthetic data to supplement real data, enhancing the robustness of machine learning models used in execution algorithms.
  4. Market Microstructure Insights: Detailed modeling of order flow and market microstructure can uncover patterns and behaviors not easily detectable with traditional methods.

Case Study: Deep State Space Models for LOB​

In a recent study, a deep state space model was used to generate realistic LOB data. This model employed a structured state-space layer to process sequences of order book states and tokenized messages, demonstrating impressive performance in approximating real market data.

Key Findings​

  • Low Perplexity: The model achieved low perplexity scores, indicating high accuracy in predicting the next token in the sequence (perplexity is defined just after this list).
  • Conditional Forecast Performance: The generated order flow showed significant correlation with real data, highlighting the model's ability to make accurate conditional forecasts.
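For reference, perplexity is the exponentiated average negative log-likelihood of the next token, so lower values indicate better next-token prediction:

```latex
\mathrm{PPL} = \exp\!\left(-\frac{1}{T}\sum_{t=1}^{T} \log p\!\left(m_t \mid m_{<t}\right)\right)
```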

Future Directions​

The application of generative AI in LOB modelling opens new avenues for research and practical applications. Future work could explore the integration of these models with reinforcement learning algorithms for trading and market making, as well as the development of more sophisticated tokenization schemes to capture additional market nuances.

Conclusion​

The use of LLMs and other generative models in execution quant, particularly for LOB modelling, represents a significant advancement in financial technology. These models offer powerful tools for simulating market conditions, improving execution strategies, and gaining deeper insights into market microstructure. As research progresses, we can expect to see even more innovative applications of generative AI in the financial markets.

References​

  • Nagy, P., Frey, S., Sapora, S., Li, K., Calinescu, A., Zohren, S., & Foerster, J. (2023). Generative AI for End-to-End Limit Order Book Modelling. arXiv:2309.00638
  • LOBSTER: Limit Order Book System - https://lobsterdata.com

MDX Blog Post

· One min read
Haoxue Wang
Founder of LLMQuant | Mathematics@University of Cambridge | HSBC | Microsoft Research

Blog posts support Docusaurus Markdown features, such as MDX.

tip

Use the power of React to create interactive blog posts.

```jsx
<button onClick={() => alert('button clicked!')}>Click me!</button>
```

Long Blog Post

· 3 min read
Haoxue Wang
Founder of LLMQuant | Mathematics@University of Cambridge | HSBC | Microsoft Research

This is the summary of a very long blog post,

Use a <!-- truncate --> comment to limit blog post size in the list view.