
How Can Financial Services Firms Mitigate The Risks Of Generative AI?


By Ian Watson

A recent Celent survey of 1,070 financial services professionals found that 44% of financial institutions (FIs) are currently experimenting with generative AI in their organizations, and another 31% have projects using the technology on their 2024 roadmaps. It is a technology on the verge of being deployed at scale.

Risk executives are scrambling to understand the myriad risks of putting genAI technology into production, and are just beginning to consider how best to mitigate the risks they can anticipate. Yet estimating the level of risk involved in adopting genAI is harder than for more familiar technologies, because some of the adverse outcomes of deploying generative AI are difficult to grasp (e.g., new forms of model bias) or entirely new (e.g., hallucinations).

The competitive implications of being left behind are grave; risk executives are loath to slow technological progress absent a clear threat with a knowable likelihood. And even if an FI does not implement any genAI projects itself, the firm is still exposed to new risks from external users of genAI (e.g., more effective phishing and social engineering attacks).

The risks of generative AI to the financial services industry, whether from consequences of deploying genAI or from external threats, still fall under traditional categories of operational/IT, regulatory/legal, reputational, and security risks. But there are five risks particular to genAI that require distinct countermeasures. 

1) Mind the regulatory environment.

When a financial institution uses genAI, it may inadvertently violate regulations, depending on the complexity of the model and the data used to train it. Consider, for example, an FI that builds its own genAI models: an employee enters clients’ personally identifiable information (PII) into the large language model without masking it, and that PII unintentionally becomes part of the firm-wide corpus of data.

To avoid regulatory violations, FIs must establish guardrails to ensure their implementation of genAI complies with a variety of rules (such as those governing fair treatment and data privacy) while also monitoring and ensuring compliance with emerging AI regulations. Financial services firms will be well served by engaging with regulators early in their use of genAI to gain clarity on relevant guidelines.
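To make the PII guardrail concrete, the minimal Python sketch below shows one way a firm might redact common PII patterns before a prompt leaves its perimeter. The patterns and placeholder labels are illustrative assumptions, not a production-grade detection list; a real deployment would rely on a vetted PII-detection service.

```python
import re

# Illustrative PII patterns only -- assumed for this sketch, not exhaustive.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_pii(prompt: str) -> str:
    """Replace detected PII with typed placeholders before the prompt
    is sent to any large language model."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(mask_pii("Client John Doe, SSN 123-45-6789, email jdoe@example.com"))
# -> Client John Doe, SSN [SSN REDACTED], email [EMAIL REDACTED]
```

Masking at the point of entry, rather than after ingestion, keeps unredacted PII out of the firm-wide corpus in the first place.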

2) Respect intellectual property.

If genAI output includes protected content—intellectual property (IP)—without permission, the financial institution may open itself up to lawsuits. This could happen when an FI relies on genAI to write blog posts, only to discover that passages from copyrighted text were used, exposing the organization to a copyright infringement lawsuit from the original author. 

Precautionary measures can help avoid IP violations. These include implementing regular checks and balances to ensure the integrity of training data; making sure that genAI output is sufficiently transformed from the original so that it isn’t considered an unauthorized derivative work; and implementing safeguards to ensure that trade secrets aren’t leaked through generative tools if they have become part of the AI’s training data.
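As a rough illustration of a “sufficiently transformed” check, the hypothetical Python sketch below flags genAI output that shares long verbatim word sequences with a protected source. The shingle length and threshold are assumptions for illustration, not legal standards.

```python
def shingles(text: str, n: int = 5) -> set:
    """Break text into overlapping n-word sequences ("shingles")."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(generated: str, protected: str, n: int = 5) -> float:
    """Fraction of the generated text's shingles that appear verbatim in
    the protected source; high values suggest insufficient transformation."""
    gen = shingles(generated, n)
    if not gen:
        return 0.0
    return len(gen & shingles(protected, n)) / len(gen)

draft = "the quick brown fox jumps over the lazy dog near the river bank"
source = "a fable: the quick brown fox jumps over the lazy dog near the river bank today"
if overlap_ratio(draft, source) > 0.05:  # threshold is illustrative
    print("Flag draft for legal review: verbatim overlap detected")
```

A check like this cannot establish legal safety on its own, but it can route suspect output to counsel before publication.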

3) Guard against bias and unethical behavior.

Human biases and unethical behaviors exist; genAI may magnify them. When this happens, unfair or discriminatory decisions and business practices are all too possible. Examples in financial services may include relying on a credit decisioning model that discriminates against certain customer types; using a third-party credit scoring service without awareness of the third-party model’s bias; or an FI employee using genAI for illegal activities or to violate customer privacy.

FIs must implement measures to eliminate AI-induced biases from customer interactions. These include clearly established AI ethics guidelines and policies, effective employee training, controlled access with monitoring and auditing of genAI usage, and periodic testing for biased output. FIs should also evaluate the ethics policies in their supply chain, ensuring that vendors meet the FI’s ethics and anti-bias standards.
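One simple form of periodic bias testing is a disparate-impact audit over logged model decisions. The Python sketch below applies the “four-fifths rule” heuristic to hypothetical approval data; the group labels, sample data, and 0.8 threshold are illustrative assumptions.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs from a decision log."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Four-fifths rule: flag any group whose approval rate falls below
    `threshold` times the highest group's rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(disparate_impact_flags(sample))  # ['B']: 0.33 approval vs. 0.67 for A
```

Running such an audit on a schedule, and on vendor-supplied models as well as in-house ones, turns the anti-bias policy into a measurable control.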

4) Don’t fall victim to bad actors.

Bank customers receive a phishing email that convinces them to expose a bank password. An employee is manipulated into sending a wire to a fraudulent company. These are examples of external threats posed by nefarious actors that become more likely as genAI is used to create more plausible news, information, or deepfakes—all leading to reputational damage above and beyond the related financial loss.

Voice and image cloning techniques may be used for phishing, lending these threats an air of authenticity: they mimic customer or employee communications to bypass security and access sensitive information, leading to costly security breaches. Additionally, bad actors may harness genAI to create sophisticated malware designed to steal users’ credentials and sensitive information. FIs must implement rigorous protocols, for employees and customers alike, to identify and manage these phishing threats.

5) Be alert for hallucinations. 

Hallucinations are plausible but incorrect responses provided by genAI. This false output, whether inaccurate, misleading, or entirely fictional, might misinform users and negatively impact decision-making. It arises because the model can use its training data to generate contextually relevant responses even when the information generated isn’t factual. The extensive regulatory requirements on FIs make hallucinations particularly risky, should they produce inaccurate output in anything from an FAQ generated by a simple “tell me” prompt to a complex client briefing. In an insurance underwriting scenario, for example, a flawed risk evaluation can lead to underwriting errors, incorrect risk pricing, or the denial of a client’s access to particular services.

To safeguard against the possible adverse outcomes of hallucinations, organizations should regularly monitor and validate AI outputs to detect and eliminate false responses. Best practices include incorporating human review; designing effective prompts, the instructions that guide the model toward relevant, high-quality responses; and fine-tuning models on narrow datasets with specific guidelines.
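As one example of automated output validation, the hypothetical Python sketch below compares numeric claims in genAI output against the source document and routes anything unsupported to human review. The regex and routing logic are illustrative assumptions, not a complete fact-checking system.

```python
import re

def unsupported_figures(generated: str, source: str) -> list:
    """Return numeric claims in the genAI output that never appear in
    the source document -- candidates for human review."""
    numbers = re.findall(r"\d[\d,.%]*", generated)
    return [n for n in numbers if n not in source]

source_doc = "Premiums rose 4.2% in 2023, with 1,070 policies renewed."
draft = "Premiums rose 4.2% in 2023; 1,200 policies were renewed."
flagged = unsupported_figures(draft, source_doc)
if flagged:
    print(f"Route to human review, unsupported figures: {flagged}")
# -> Route to human review, unsupported figures: ['1,200']
```

Checks of this kind are deliberately narrow; they catch one class of hallucination (fabricated figures) cheaply, while human review remains the backstop for everything else.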

Mitigate to innovate

Mitigating the risks of genAI necessitates a collaborative effort: from the front office to the back office, across IT and data teams, and with participation from functional areas including compliance, human resources, and risk. Financial institutions can begin by testing the degree to which their current risk guidelines and mitigation protocols can address the additional risk presented by the use of generative AI. They can then develop and implement risk-specific mitigation procedures and tools to close any gaps, particularly those that could impact operational efficiency or customer engagement. The risks and returns of genAI may be uniquely challenging to estimate, but the effort to do so will be essential for organizations committed to innovation.

About the Author: Ian Watson is head of risk research at Celent, a global research and advisory firm focused on technology and business strategies in the financial services industry. Ian joined Celent after more than 20 years in the technology industry, consulting for banking and insurance clients. He has an MBA from Columbia Business School and a bachelor’s degree in economics from Columbia College.
