What are the opportunities and challenges of AI in the fraud prevention and identity verification space? We caught up with Heidi Hunter, Chief Product Officer for IDology, a GBG company, to find out.

IDology delivers a comprehensive suite of identity verification, AML/KYC, and fraud management solutions to help businesses drive revenue, deter fraud, and maintain compliance. Founded in 2003, IDology made its Finovate debut in 2012. GBG acquired the company in 2019.

Ms. Hunter joined GBG Americas in 2011 and has held both product innovation and customer success roles during her career with the company. She brings more than 13 years of experience supporting customers and meeting their business needs through product innovation, support, and implementation roles.

Currently, Ms. Hunter is responsible for driving the company’s product roadmap and bringing new innovations to the identity verification market through strategic product development.


AI has brought both challenges and opportunities when it comes to fraud and financial crime. What are the principal challenges financial institutions are facing?

Heidi Hunter: There are four main areas of concern: cybersecurity and fraud, biased models, human oversight, and regulatory compliance.

Deloitte has written about the growing concern over AI as a cybersecurity and fraud threat, noting that 51% of executives interviewed believe the cybersecurity vulnerabilities of AI are a major concern. One issue is more and better fake documents: AI simplifies the creation of passports, driver's licenses, and ID cards that are virtually indistinguishable from genuine ones. Another is increased synthetic identity fraud. Generative AI is a productivity tool for fraudsters, capable of creating highly realistic synthetic identities at scale.

Additionally, phishing and social engineering are becoming more effective. A recent study of 1,000 decision makers found that 37% had experienced deepfake voice fraud, and Generative AI is fueling a surge in phishing tactics.

You also mentioned biased models, human oversight, and compliance.

Hunter: The use of AI and machine learning (ML) algorithms has come under scrutiny over concerns about data bias, transparency, and accountability. With regard to human oversight, 88% of consumers reported that they would discontinue a helpful personalization service if they didn't understand how their data would be managed.

Lack of human oversight is also a regulatory concern. AI often lacks transparency, leaving businesses exposed when they must explain their decisioning, and this has raised expectations of future regulation. AI-generated deepfakes are moving faster than policymakers can respond.

Can the same technology that’s enabling fraudsters also enable FIs to thwart them?

Hunter: Yes, especially when AI is paired with human intelligence. AI benefits from experts charged with overseeing its incoming and outgoing data. A trained fraud analyst accompanying AI-based solutions can catch new and established fraud trends, including novel threats that AI solutions on their own may miss.

From a compliance perspective, this means businesses can offer a more transparent solution and manage potential bias. Supervised AI can eliminate the need to manually verify an ID and help provide the explanation needed to meet compliance and regulatory requirements.
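
To illustrate the idea, here is a minimal, hypothetical sketch in Python (not IDology's implementation; the thresholds and identifiers are invented): a human-in-the-loop pipeline automates the clear-cut cases, escalates ambiguous ones to a trained analyst, and records a human-readable reason for every decision so it can be explained later.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    verdict: str       # "approve", "reject", or "review"
    confidence: float  # model confidence in [0, 1]
    reason: str        # human-readable explanation retained for compliance/audit

# Hypothetical thresholds; a real deployment would tune these to its risk appetite.
REJECT_AT, APPROVE_AT = 0.95, 0.05

def route(doc_id: str, fraud_probability: float, analyst_queue: list) -> Decision:
    """Automate the clear cases; escalate uncertain ones to a trained analyst."""
    if fraud_probability >= REJECT_AT:
        return Decision("reject", fraud_probability, "model: strong fraud signal")
    if fraud_probability <= APPROVE_AT:
        return Decision("approve", 1 - fraud_probability, "model: strong genuine signal")
    analyst_queue.append(doc_id)  # human oversight covers the ambiguous middle
    return Decision("review", fraud_probability, "escalated: model uncertain")

queue = []
print(route("doc-001", 0.02, queue))  # auto-approved, no manual ID check needed
print(route("doc-002", 0.50, queue))  # routed to the analyst queue
print(queue)                          # ['doc-002']
```

The recorded reason is what makes the decisioning explainable to a regulator, which is the compliance benefit described above.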

Automation plays a major role in AI. So does human oversight. Can you talk about the relationship between AI and automation?

Hunter: Automation is typically rule-based and follows predetermined instructions, while AI can learn from data and make predictions and decisions based on what it is presented with. That 'predictions' aspect of AI- and ML-based tech is where human supervision plays such an important role.
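
A minimal Python sketch of that distinction (the rules, features, and training data here are illustrative assumptions, not a production fraud model):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Automation: predetermined, hand-written rules.
def rule_based_check(amount: float, country_mismatch: bool) -> bool:
    return amount > 5000 or country_mismatch  # fixed, hypothetical thresholds

# AI/ML: behavior learned from (toy) historical data rather than written rules.
X_train = np.array([[120, 0], [8000, 1], [45, 0], [6200, 1], [300, 0], [9100, 1]])
y_train = np.array([0, 1, 0, 1, 0, 1])  # 1 = fraud
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

amount, mismatch = 4800.0, False
print("rule verdict:", rule_based_check(amount, mismatch))
# The model outputs a *prediction* with a probability -- this is exactly where
# human supervision matters, because predictions can be wrong or can drift.
print("model p(fraud):", model.predict_proba([[amount, int(mismatch)]])[0, 1])
```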

What is the proper balance between human oversight and AI? What role do humans have in an increasingly AI-powered world?

Hunter: As with any tool, human-supervised AI is most effective when it's one part of a larger identity verification (IDV) strategy.

Humans have a role at every 'stage' of AI use or implementation: in development, in terms of what data is used to train a model; during deployment, in deciding where an AI-based tool is used and to what degree; and in holding AI-based tools accountable, which means analyzing a given output and the decisions an FI makes based on that output.

For identity verification specifically, how has human-supervised AI helped solve problems?

Hunter: Consumers set the bar high for seamless interactions. For example, 37% of consumers abandoned a digital onboarding process because it was too time-consuming. Overcoming this challenge requires a comprehensive strategy, and human-supervised AI can play a critical role: it can quickly scrutinize vast volumes of digital data to uncover patterns of suspicious activity while also providing insight and transparency into how decisions are made.
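
One common way such transparency is achieved (a simplified sketch; the signals and weights below are invented for illustration, not IDology's scoring) is a model whose per-feature contributions double as reason codes an analyst or regulator can read directly:

```python
# Illustrative, hand-picked weights for a transparent linear risk score.
WEIGHTS = {
    "document_tamper_score": 2.0,   # evidence of document manipulation
    "selfie_liveness_fail":  1.5,   # failed liveness/deepfake check
    "address_mismatch":      0.8,   # applicant address disagrees with records
}

def score_with_reasons(features: dict) -> tuple:
    """Return a risk score plus the signals that drove it (the 'reason codes')."""
    contributions = {name: WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS}
    score = sum(contributions.values())
    reasons = [name for name, c in sorted(contributions.items(),
                                          key=lambda kv: -kv[1]) if c > 0]
    return score, reasons

score, reasons = score_with_reasons({"document_tamper_score": 0.9, "address_mismatch": 1.0})
print(f"risk score = {score:.2f}; reasons = {reasons}")
# risk score = 2.60; reasons = ['document_tamper_score', 'address_mismatch']
```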

Are businesses embracing human-supervised AI? What hurdles remain to broader adoption?

Hunter: Yes. While there is a lot of excitement around what AI can do, many businesses and members of the academic community believe AI isn't ready to make unsupervised decisions. As mentioned earlier, businesses are concerned about AI operating on its own, with worries ranging from ethical questions, to cybersecurity and fraud risks, to making a bad business decision based on AI output. On a positive note, businesses are becoming more aware of the benefits of supervised learning models.




Source: https://finovate.com/ai-and-the-fight-against-fraud-a-conversation-with-idologys-heidi-hunter/