AI in Investment Management: Mapping Comfort Zones, Constraints and the Path to Alpha


Author: Jonathan Philp, CFA

On Tuesday 31 March 2026, the Warwick Business School Gillmore Centre for Financial Technology, CFA Institute and CFA UK hosted an event headlined ‘AI in Investment Management: Mapping Comfort Zones, Constraints and the Path to Alpha’. The event was moderated by Dan Philps, CFA, Head of Investment Strategies, Rothko.

The purpose of the event was to provide real-world perspectives on the practical adoption of AI technologies in the investment process. Four speakers gave their perspectives on the opportunities and challenges posed by emerging tools and practices.

Julan Al-Yassin, CFA

Julan Al-Yassin, CFA, Director, Learning Content, CFA Institute, reviewed the implications of AI adoption for the CFA Institute’s Code of Ethics and Standards of Professional Conduct. In what became a recurring theme, it was observed that existing principles must continue to be applied; Julan cited Standard V.A. (Diligence and Reasonable Basis) in particular, along with Standard I.C. (Misrepresentation) and Standard III.E. (Preservation of Confidentiality), as examples of current standards that already address potential risks arising from the use of AI tools in the investment process.

Julan noted that the CFA Institute is actively considering where specific extensions to the current standards may be needed. A draft governance framework is in preparation that will address how investment professionals should interact with AI models and will offer guidance on disclosure, documentation and audit practices.

Julan also discussed the need to prepare investment professionals to operate successfully as AI adoption accelerates. The CFA Institute and societies across its network are developing educational resources for their members across a broad range of technical and governance issues, and these topics will also feature in the CFA curriculum in the future. 

James Hadfield

Senior risk leader James Hadfield discussed AI adoption from the perspective of a Chief Risk Officer. Given the pace of change and the proliferation of models, due care is needed in the selection, deployment and operation of AI tools. A measured approach is needed as the regulatory framework hardens, both for AI specifically and for the closely related themes of data privacy and operational resilience. Good governance should support exploration and innovation, but haphazard adoption creates the risk of fragmentation of the technology stack and diminished transparency. A sound governance framework involving business, risk, technology and compliance stakeholders is key.

Once again, sound, established risk management principles continue to apply, but the pace of AI adoption and the nature of the technology create specific challenges. James noted the particular need for attention to supplier due diligence, as many AI vendors innovating in the space are start-ups and may themselves not have mature governance frameworks in place that meet the standard required by the tightly regulated financial services industry. The potential for AI processes to be injected through third-party products and services must also be considered.

AI is provoking behavioural change as practitioners respond to hype and fear of missing out. Ownership and accountability are paramount, and existing regimes such as the UK Senior Managers & Certification Regime and Consumer Duty provide frameworks for the design and application of AI governance.

James’s key recommendations were to define AI use cases clearly and manage their scope, to integrate AI risk management into existing risk frameworks, and to implement suitable safeguards such as stress testing and guardrails. The upskilling of the user base, in terms of both capability and responsible use, is also essential.

Simon Legrand-Green, CFA

Simon Legrand-Green, CFA, Head of Multi Asset and Systematic Strategies Research, WTW, explored some investment management AI use cases and discussed how investment consultants view the opportunities and risks that arise.

AI is being used to generate synthetic data to fill gaps in official data series by interpolation from alternative sources, in response to, for example, the US government shutdowns. This synthesis of market data can be extended to model market events and to train AI models, though Simon suggested that outcomes to date have been mixed. 
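As an illustration only (not a method attributed to any speaker), the gap-filling idea Simon described can be sketched as interpolating missing points in an official series in proportion to the movement of a correlated alternative series. All names and figures below are hypothetical.

```python
# Illustrative sketch: fill gaps (None values) in an official data series,
# guided by a correlated alternative series. Real synthetic-data approaches
# are far more sophisticated; this is a minimal toy example.

def fill_gap(official: list, alternative: list) -> list:
    """Replace None entries by interpolating between the nearest known
    points, weighted by the alternative series' movement over the gap."""
    filled = list(official)
    for i, v in enumerate(filled):
        if v is None:
            # Nearest known neighbours on each side of the gap.
            lo = max(j for j in range(i) if filled[j] is not None)
            hi = min(j for j in range(i + 1, len(filled)) if filled[j] is not None)
            alt_span = alternative[hi] - alternative[lo]
            # Weight by the alternative series; fall back to linear if flat.
            frac = ((alternative[i] - alternative[lo]) / alt_span
                    if alt_span else (i - lo) / (hi - lo))
            filled[i] = filled[lo] + frac * (filled[hi] - filled[lo])
    return filled

official = [100.0, None, None, 112.0]   # e.g. a series halted by a shutdown
alternative = [50.0, 52.0, 55.0, 56.0]  # correlated alternative data
result = fill_gap(official, alternative)
```

The interpolation inherits the shape of the alternative series, which is the point of using alternative data rather than a straight line between the known endpoints.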

Another application is signal generation, where AI tools are used to enhance and extend traditional screening approaches. The use of Large Language Models offers the potential for the systematic treatment of qualitative as well as quantitative data. Simon gave the example of analysis of tonal differences between prepared remarks and responses to questions in earnings calls.
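To make the tonal-comparison idea concrete, here is a deliberately simplified sketch, not the approach Simon described in detail: score the tone of prepared remarks and Q&A responses separately and look at the gap. A production system would use an LLM or a finance-specific lexicon; the word lists and sample text below are invented.

```python
# Toy sketch: compare the tone of prepared remarks vs. Q&A in an earnings
# call. The lexicons here are illustrative stand-ins for an LLM-based or
# finance-tuned sentiment model.

POSITIVE = {"growth", "strong", "improved", "confident", "record"}
NEGATIVE = {"headwinds", "decline", "uncertain", "weak", "challenging"}

def tone_score(text: str) -> float:
    """Net sentiment per word: (+1 per positive, -1 per negative) / words."""
    words = text.lower().split()
    if not words:
        return 0.0
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    return score / len(words)

def tone_gap(prepared: str, qa: str) -> float:
    """A positive gap suggests prepared remarks are more upbeat than Q&A."""
    return tone_score(prepared) - tone_score(qa)

prepared = "We delivered record growth and remain confident in our strong pipeline"
qa = "Margins remain uncertain and we face challenging headwinds next quarter"
gap = tone_gap(prepared, qa)  # positive: scripted tone exceeds Q&A tone
```

A persistent positive gap across calls could feed a screening signal, which is the systematic treatment of qualitative data that LLMs make possible at scale.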

In portfolio construction, new models are usually applied gradually until confidence in performance and calibration builds, but the alpha-generating capability of new models is observed to decay quickly. AI is being used to accelerate the back-testing and validation of new models to maximise alpha capture.

Finally, Simon noted a growing range of applications in trade management, with AI tools used to accelerate the analysis and selection of trading parameters. In the manager selection process, Simon advised that consultants will look for transparency and clarity in investment AI use cases, as well as solid model validation and governance practices.

Carlos Salas, CFA

Carlos Salas, CFA presented an AI use case which he delivered for a smaller investment firm. He noted that such firms face particular challenges in experimenting with and implementing AI. Across the industry, firms face margin pressure and the greater part of technology budgets is consumed by legacy architecture costs, leaving little for investment in innovation. AI comes with high expectations but often delivers poor return on investment, with productivity gains slow to materialise, a situation Carlos described as ‘experimentation hell’.

Carlos described the selection of tools and the proof of value for a red flag analysis use case. In the current state, an investment analyst performs a periodic screen for business-critical red flags across a portfolio of micro- and small-cap stocks. Challenges include limited analyst capacity and a low frequency of updates, given the effort required to seek out information.

The objective of the project was to augment, not replace, human capacity, increasing the frequency and intensity of the analysis and tackling potential human biases. By injecting an AI agent into the workflow, the tasks of gathering news, filtering and LLM-supported red flag analysis could be compressed and the results served up for final human decisions to be taken. Fine-tuning reduced the frequency of false positive signals, and guardrails were put in place in the form of static rules. Importantly, given the emphasis on the principle of Human in the Loop, a user-friendly web app interface and an Excel plug-in were provided.
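The pipeline Carlos described can be sketched in outline: gather news, classify with an LLM, apply static-rule guardrails, and queue the surviving flags for human review. This is a hypothetical reconstruction, not Carlos's implementation; `llm_classify` is a keyword stand-in for a real LLM call, and all names and rules are illustrative.

```python
# Hypothetical sketch of an agentic red-flag pipeline:
# gather news -> LLM classification -> static-rule guardrails -> human review.

from dataclasses import dataclass

@dataclass
class Flag:
    ticker: str
    headline: str
    severity: str  # "red", "amber", or "none"

# Static guardrail rules: these terms always escalate, whatever the model says.
GUARDRAIL_TERMS = {"fraud", "restatement", "going concern"}

def llm_classify(headline: str) -> str:
    """Stand-in for an LLM call that rates red-flag severity."""
    text = headline.lower()
    if any(term in text for term in GUARDRAIL_TERMS):
        return "red"
    if "investigation" in text or "lawsuit" in text:
        return "amber"
    return "none"

def screen(news: list) -> list:
    """Filter headlines and surface candidate flags for final human review."""
    flags = []
    for ticker, headline in news:
        severity = llm_classify(headline)
        # Guardrail overrides the model's output for critical terms.
        if any(term in headline.lower() for term in GUARDRAIL_TERMS):
            severity = "red"
        if severity != "none":
            flags.append(Flag(ticker, headline, severity))
    return flags  # a human takes the final decision on each flag

news = [
    ("ABC", "ABC announces earnings restatement after audit"),
    ("XYZ", "XYZ opens new distribution centre"),
    ("DEF", "DEF faces regulator investigation over disclosures"),
]
flags = screen(news)
```

Keeping the static rules outside the model call is the design point: the guardrail behaves deterministically even if the model's output drifts, and the human reviewer sees only pre-filtered candidates.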

The project achieved its objectives; the analysis can now be carried out daily with time required falling to half an hour from over 40 hours in the pre-AI state. The selection of open-source technology components was driven by the need to balance cost and complexity, and the project has established a foundation for future AI investment. Carlos recapped themes noted by preceding speakers, particularly the importance of clear objectives, due consideration for governance and the need to upskill human capital to make best use of the potential of AI tools and practices at scale.

This well-attended event underlined the enthusiasm for the potential of AI in the investment process, and our excitement at the fast pace of change. The focus of AI innovation includes well-understood quantitative machine learning practices and, increasingly, the use of Large Language Models for qualitative analysis and reasoning and the deployment of autonomous AI agents into business processes. The event title ‘Path to Alpha’ articulates well the journey from theoretical potential to practical, scalable and defensible implementation. I took away the following thoughts.

  • The rapidity of AI adoption and the evolution and proliferation of models and tools is bewildering, and care is needed to define and execute a successful AI strategy for the investment firm. Use cases should be defined tightly and scope creep resisted. Thoughtful evaluation of models and tools is needed against a backdrop of tightening regulation across AI adoption, operational resilience and data privacy. 
  • AI is driving significant disruption to established suppliers of technology products and services to the investment industry. The need for careful due diligence when interacting with innovative start-ups was highlighted, and vendors themselves should consider how to improve their success rate by understanding and catering to the compliance obligations that financial services firms face. 
  • Requirements for transparency and clarity are clear constraints on the use of AI in the investment process. We have a long way to go before autonomous AI agents displace human judgement. However, the promise of productivity gains and the acceleration and intensification of a whole range of analytical and operational processes can be realised. Outcomes must, of course, be measurable to prove return on investment. 
  • Governance is critical. A good governance process should encourage exploration and innovation while providing a clear framework for the identification and management of risks across the definition, implementation, validation and operation lifecycle. The pace of change cannot be an excuse to cut corners. Well-established, sound risk management principles can readily be applied to AI adoption. Good practice puts Humans in the Loop and accountability frameworks such as SM&CR put Humans on the Hook. 
  • Upskilling is essential if the gains from AI are to be captured. Investment professionals need to develop the capabilities to select and interact with AI models and tools and to fulfil effectively the critical Human in the Loop role.

Carlos made the point that AI tools should also be designed with human use in mind. Here, academic and professional institutions such as WBS, the CFA Institute and societies like CFA UK have a critical role to play in supporting the industry through the AI revolution.

 

ABOUT THE AUTHOR

Jonathan Philp, CFA is a Principal Specialist in the Banking & Financial Markets practice at NTT DATA, a global technology services firm. His role is to support clients in the definition and delivery of innovative technology solutions.

Prior to NTT DATA, Jonathan worked in a variety of institutional investment management and asset servicing contexts and for technology solution vendors and service providers. He has held a range of
programme, project and product management roles. In the early part of his career, he was an equity analyst focusing on the global Technology sectors.

Jonathan holds an MEng in Electrical Engineering from Imperial College and an MA in European Political & Economic Integration from Durham University. He has been a CFA Charterholder since 2002.
