Emerging Role of AI in the Financial Services Industry
While the concept of artificial intelligence (AI) has been around since the 1950s, 2023 was the technology’s breakout year, owing in large part to the explosion in popularity of OpenAI’s ChatGPT. Now relied on throughout a range of industries, ChatGPT and other Large Language Models (LLMs) from competing vendors are used extensively in both personal and professional capacities because of their versatility and broad range of use cases.
The financial industry has also picked up on the LLM trend, as the technology offers a variety of applications within finance, assisting with everything from software development and data analysis to regulatory compliance. But how can practitioners ensure the sound and responsible use of AI technology in finance? Below, we’ll explore this pressing topic.
Establishing Controls around AI
Because generative AI (genAI) technologies like ChatGPT are emerging rapidly, establishing rigorous controls for managing their risks is a major concern for the financial industry. Unfortunately, this task is not clear cut at present. Understanding the inner workings of these technologies from a technical perspective is difficult, as the models are incredibly large, complex, and opaque, and adequately explaining their outputs, especially for high-risk applications, remains a challenge.
However, the industry is trending in the right direction in determining how to use AI models ethically, soundly, and reliably across the various use cases found in financial services. Global financial regulators have already begun taking steps to develop guidance for market participants on the responsible use and regulation of AI.
In fact, a Risk.net article reported that IOSCO is gearing up to do ‘intensive work’ in this area and, according to the article, is drawing up plans to identify inherent AI risks. But it will take some time for regulators to fully understand how AI will be used in the financial sector and to issue proper guidance.
Once the proper controls are in place, there is extensive room for AI to be used for mission-critical functions, such as improving the client experience and optimizing internal business operations.
Will Humans Become Obsolete?
Many of us might be wondering: will AI ever carry out key tasks in quantitative finance, such as analyzing market data or making predictions, completely autonomously? Or will there always be an element of human intervention needed?
In a recent Numerix podcast, Prag Sharma, the Global Head of Artificial Intelligence at Citi's AI Centre of Excellence, weighed in on this issue. His view is that, for the foreseeable future, humans will need to stay in the loop for high-risk actions and applications. AI is not yet equipped with reasoning capabilities sophisticated enough to enable its autonomous use in the capital markets. For now, it is more likely that genAI will be used in tandem with traditional machine learning to provide enhanced quantitative insights, as sketched below.
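To make that “in tandem” idea concrete, here is a minimal sketch in which a traditional model produces the numbers and an LLM only drafts a plain-language summary for a human analyst to review. The toy regression, the gpt-4o-mini model name, and the prompt are illustrative assumptions, not a workflow endorsed by Citi or discussed on the podcast.

```python
# Sketch: traditional ML produces the forecast; genAI only drafts a narrative for review.
# The data, model choice, and prompt are all illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from openai import OpenAI  # requires OPENAI_API_KEY in the environment

# 1) Traditional machine learning: a toy next-day return forecast from lagged returns.
rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, size=250)        # synthetic daily returns
X = np.column_stack([returns[:-1]])            # yesterday's return as the only feature
y = returns[1:]                                # today's return as the target
model = LinearRegression().fit(X, y)
forecast = float(model.predict([[returns[-1]]])[0])

# 2) Generative AI: turn the numeric output into a draft summary, never a decision.
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whatever your firm has approved
    messages=[{
        "role": "user",
        "content": (
            f"In two sentences for an analyst, summarize a linear-regression forecast "
            f"of next-day return equal to {forecast:.4%}, and note that a human must "
            f"review the forecast before any use."
        ),
    }],
)
print(response.choices[0].message.content)
```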
When engaging in complex tasks in finance, robust oversight will continue to be necessary, providing practitioners, their firms, and the wider industry with a needed level of comfort. For low-risk functions, such as asking AI to rewrite emails or summarize reports, it is perfectly acceptable for AI to be used as a helpful tool.
ChatGPT and Data Handling
By nature, financial firms possess an abundance of proprietary data. But how does this work in the context of using LLMs such as ChatGPT?
It’s important to remember that every technology possesses inherent limitations, and ChatGPT is no different. For example, the information it produces is not infallible, making it unsuitable as the sole basis for market predictions or key decisions. What’s more, clearly and thoroughly articulating a request (a practice known as prompt engineering) will generally produce the best outcomes when using this tool, as the brief example below illustrates.
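As a rough illustration of prompt engineering, the snippet below contrasts a vague request with a structured one. The prompts and the report placeholder are hypothetical; the point is only that a specific, well-framed request tends to produce a more useful answer.

```python
# Hypothetical prompts illustrating prompt engineering: a vague request versus a
# structured one that states the task, focus, and audience explicitly.
vague_prompt = "Tell me about this report."

structured_prompt = (
    "You are assisting a fixed-income analyst.\n"
    "Task: summarize the report below in exactly three bullet points.\n"
    "Focus on: changes in duration risk and any new regulatory items.\n"
    "Audience: a portfolio manager with five minutes to read.\n\n"
    "Report:\n{report_text}"
)

print(structured_prompt.format(report_text="<paste a non-proprietary report here>"))
```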
Equally important to emphasize is that ChatGPT should not be trusted as a sole research source, due to its tendency to generate inaccurate information, or “hallucinate.” For this reason, users need to verify all outputs provided by the application. It's also imperative to avoid submitting a company’s proprietary data or information in ChatGPT prompts or conversations, especially with the freely available version, as these inputs might be utilized for future training of the LLM and eventually become accessible to other users. One simple precaution is sketched below.
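As one example of such a precaution, the sketch below scrubs obvious identifiers from text before it ever reaches a public LLM. The regular expressions are illustrative assumptions and far from exhaustive; real controls should rely on a firm’s own data-governance tooling.

```python
# Sketch: redact obviously sensitive tokens before a prompt is sent to an external LLM.
# The patterns are illustrative only and do not constitute a complete control.
import re

REDACTIONS = [
    (re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{1,30}\b"), "[IBAN]"),   # crude IBAN-like pattern
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),              # US SSN format
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),          # email addresses
]

def scrub(text: str) -> str:
    """Replace obviously sensitive tokens before a prompt leaves the firm."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Contact jane.doe@example.com about account DE89370400440532013000."))
```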
Get a Deeper View into AI in Finance
Listen to our podcast for a deeper conversation on the rapid integration of AI into the financial sector: A Deep Dive into the Current and Future Role of AI in Finance with Prag Sharma, the Global Head of Artificial Intelligence at Citi's AI Centre of Excellence. Listen in as Prag and host James Jockle unpack the implications of AI in finance, from strategy to ethical usage, and the urgency for an expert ecosystem.
This blog is part one of a three-part series digging into the role of AI in finance. Look out for upcoming posts exploring additional facets of this fascinating topic.