GenAI can help compliance officers detect potential fraud and enables them to focus on high-risk cases. For all the challenges this new technology presents, the opportunity it offers far outweighs the risk, as Shield VP of Data Science Shlomit Labin explains in this interview.
How does GenAI help compliance officers detect potential fraud within communications?
GenAI models build on Natural Language Processing (NLP) to interpret the intricate aspects of human language, allowing them to grasp implicit meanings in communications. Advanced GenAI systems can reveal valuable hints that serve as a starting point for investigations and help focus efforts amidst the vast volume of transactional data.
We rarely say exactly what we mean – there is always something to read between the lines. GenAI can analyze various communication platforms, both spoken and written, to surface insights that would be much harder to extract manually. It can meticulously examine every interaction among traders and comprehend the underlying context. This enables the identification of subtle language cues that may indicate suspicious activity, such as intentionally ambiguous references or exceptionally broad statements. By integrating language comprehension with contextual understanding, these platforms can evaluate potential risks, correlate them with relevant transactional data, and flag dubious interactions for further examination by human experts.
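The flagging flow described above can be sketched in miniature. This is not Shield's actual system: a production platform would use a trained language model, whereas the cue list, scoring rule, and threshold below are invented purely to illustrate the shape of the pipeline (score each message for evasive phrasing, then pass only high-scoring messages to a human reviewer).

```python
# Toy illustration of cue-based risk flagging. The cue phrases and
# threshold are hypothetical examples, not a real detection model.
AMBIGUITY_CUES = [
    "you know what i mean",
    "the usual arrangement",
    "let's take this offline",
    "keep this between us",
]

def risk_score(message: str) -> int:
    """Count how many ambiguity cues appear in a message."""
    text = message.lower()
    return sum(cue in text for cue in AMBIGUITY_CUES)

def flag_for_review(messages: list[str], threshold: int = 1) -> list[str]:
    """Return only messages whose score meets the threshold,
    so analysts concentrate on the riskiest interactions."""
    return [m for m in messages if risk_score(m) >= threshold]

chat = [
    "Confirmed: 500 shares of ACME at market open.",
    "Let's take this offline -- you know what I mean.",
]
flagged = flag_for_review(chat)
```

A real system would replace the keyword scorer with a language model that also weighs context, but the surrounding workflow (score, threshold, escalate to a human) stays the same.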
How does this make the work of investigators, analysts and other fraud prevention professionals easier?
Data growth is no secret to any professional in IT, cybersecurity or compliance – in 2010 only two zettabytes of data existed globally, by 2022 we clocked in at 97, and we are projected to hit 181 zettabytes in 2025. The immense volume of data and alerts organizations have to handle is overwhelming. GenAI platforms, however, effectively address this issue by significantly reducing alert fatigue. They achieve this by minimizing the sheer volume of data that humans need to sift through, allowing professionals to concentrate solely on high-risk cases.
Moreover, GenAI platforms provide fraud prevention teams with the capability to ask questions using natural language. This empowers teams to work more efficiently, eliminating the constraints imposed by one-size-fits-all, pre-determined questions utilized by traditional AI tools. By being able to comprehend a wider range of open-ended questions, GenAI platforms enable investigators to derive immediate value from them. They can ask broad questions and subsequently delve deeper with follow-up inquiries, all without the necessity of initial algorithm training.
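The natural-language querying idea above can be made concrete with a deliberately simplified stand-in. In a real GenAI platform, an LLM would translate the analyst's open-ended question into a structured query; here a trivial keyword matcher plays that role, so only the overall flow (question → structured filter → filtered alerts) is illustrated. The alert records and filter logic are invented for this sketch.

```python
# Hypothetical alert store; fields are illustrative only.
ALERTS = [
    {"id": 1, "risk": "high", "channel": "chat"},
    {"id": 2, "risk": "low", "channel": "email"},
    {"id": 3, "risk": "high", "channel": "email"},
]

def answer(question: str) -> list[dict]:
    """Stand-in for the LLM step: map phrases in the analyst's
    question to filters over the alert store."""
    q = question.lower()
    results = ALERTS
    if "high-risk" in q or "high risk" in q:
        results = [a for a in results if a["risk"] == "high"]
    if "email" in q:
        results = [a for a in results if a["channel"] == "email"]
    return results

hits = answer("Show me high-risk alerts from email")
```

The value described in the interview comes from replacing the keyword matcher with a model that understands arbitrary phrasings and follow-up questions, with no pre-determined question set.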
What are the major challenges of using GenAI for communication compliance purposes?
A significant drawback of GenAI solutions in the financial services industry is that they are predominantly available through application programming interfaces (APIs). This setup prevents the analysis of potentially sensitive data on-premises, behind the security controls that regulators have approved. Although on-premises versions of these solutions exist to mitigate the issue, many organizations lack the in-house computing resources to deploy them.
Another prominent challenge in leveraging GenAI-powered fraud detection and monitoring in the financial services sector is establishing trust in the generated results. GenAI is still in its early stages, and some perceive it as a “black box” where even its creators don’t fully comprehend how it reaches its conclusions. However, this portrayal is inaccurate. It’s not unexplainable, it’s just complex. This is where data scientists and experts have to do more work explaining how these systems are used and how they arrive at their conclusions.
How can investigators or analysts responsibly use GenAI in detecting fraud?
To establish trust in GenAI among financial services regulators, it is crucial to prioritize transparency and explainability. Platforms should aim to demystify the decision-making process by providing clear documentation on the architecture, training data, and algorithms employed by each AI model. Additionally, they should develop methodologies that enhance explainability, incorporating interpretable visualizations and emphasizing key features, limitations, and potential biases.
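One simple way to deliver the explainability described above is to surface each feature's contribution alongside the overall risk score, so a reviewer sees which signals drove an alert. The feature names and weights below are invented for illustration; a real platform would derive attributions from its actual model.

```python
# Hypothetical linear scoring weights, for illustration only.
WEIGHTS = {
    "ambiguous_language": 0.5,
    "off_channel_reference": 0.3,
    "unusual_timing": 0.2,
}

def explain(features: dict[str, float]) -> tuple[float, list[tuple[str, float]]]:
    """Return the total risk score and per-feature contributions,
    sorted so the strongest signals appear first."""
    contributions = {f: WEIGHTS.get(f, 0.0) * v for f, v in features.items()}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return score, ranked

score, ranked = explain({"ambiguous_language": 1.0, "unusual_timing": 1.0})
```

For complex models, attribution techniques serve the same purpose: the alert is accompanied by the evidence behind it, not just a score.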
For financial services analysts, building trust can be achieved through comprehensive training and education. This involves explaining how GenAI functions and delving into its potential limitations. In addition, fostering trust in GenAI can be facilitated by adopting a collaborative human-AI approach. Instead of treating GenAI systems as autonomous decision-makers, it is more effective to view them as one tool within a broader fraud detection framework. The emphasis should be on the synergy between human judgment and AI capabilities, rather than on relying solely on the AI and its judgments.
Dr. Shlomit Labin is the VP of Data Science at Shield, which enables financial institutions to more effectively manage and mitigate communications compliance risks. She earned her PhD in Cognitive Psychology from Tel Aviv University.