Generative artificial intelligence (AI) has already had a pervasive impact on our lives, with many experts sharing their opinions on the technology’s industry-changing commercial potential and how it might be used in the future. I recently shared my thoughts on how the technology will change the payments industry, but while there are many positive applications for generative AI in our sector, there are, unfortunately, also many people looking to use the technology for fraudulent purposes. Here are a few ways fraudsters are already using generative AI today that colleagues in the payments industry should be aware of:
More sophisticated phishing threats. We are already seeing more sophisticated attacks being launched. Indeed, as recently discovered by Cyble, fraudsters are even creating fake websites that offer free downloads of the ChatGPT tool. This type of hyper-reactive approach is typical of fraudsters, who will often take advantage of new trends to attract people’s attention. In this case, fraudsters are offering fake ChatGPT downloads as cover for installing malware on their victims’ devices, which enables them to access personal data, passwords, or even cash.
This example may not use generative AI itself, but the technology is also boosting the effectiveness of these types of scams by improving a fraudster’s ability to compose genuine-looking fake emails, websites, and other forms of communication. By training generative AI models on officially produced information, fraudsters can mimic the approach, language, grammar, and even the way links are handled in legitimate emails, making their phishing attempts appear far more authentic. The more convincing the fake, the higher the success rate. If a fraudster gains access to someone’s email account, they can then phish for information, deploy malware, or cause further damage by training their tools on real emails, so that any future communication more closely resembles the real thing.
Sensitive data collection. Fraudsters already use huge libraries of tools for collecting sensitive information, but now they are using generative AI-powered scams to improve their ability to gain access to, and then take over, social media accounts. Taking over an account enables a fraudster to target a victim’s contacts in a trusted environment, where they may have the opportunity to ask a victim’s close friends or family for money in a synthesised Authorised Push Payment attack. They may also use social engineering techniques to access further sensitive information, probing for possible passwords and the answers to security questions used for online banking logins, for example. In each scenario, generative AI is constantly learning the victim’s communication style, which makes any future attacks even more effective and harder to detect.
Fraudsters are also using generative AI to create and post extremely convincing fake job adverts on official job boards. CVs contain highly personal information, and scams like this typically target vulnerable people, especially those who have been made redundant or are just starting out in their careers, who are more likely to give out personal information to recruiters. Upon successfully ‘hiring’ an employee, fraudsters can even ask for their victim’s ID and bank details, knowing it would take at least a month before payday – ample time to clone the identity and use it.
Synthetic identity generation. Synthetic identity fraud is already a major problem for many banks and financial institutions, and fraudsters are now using generative AI and synthetic data to help them bypass verification checks. Fraudsters can generate impressive fake identity documentation that is realistic enough to pass many of the standard checks involved in opening a bank account, for example. Once a fraudulent account has been opened, fraudsters can apply for credit, take advantage of buy now, pay later offers, or worse.
Generative AI is making this sort of fraud easier, quicker, and more scalable. Fraudsters can launch multiple fraudulent attacks at the click of a button, automating the process of producing fake identities and personas and dramatically increasing their chances of success. Each attack adds to a fraudster’s growing corpus of real data, which improves the model’s training and boosts the accuracy of any subsequent attack. Some fraudsters are even using generative AI tools to generate synthetic identity data for sale on the dark web, rather than using it themselves.
Detailed responses to fraud investigation questions. Conversational AI’s ability to respond to questions much as a human would poses a unique problem for fraud investigators, and generative AI is enhancing fraudsters’ ability to develop models that act and sound like a real person. These models can also learn from, and quickly understand, the context and history of any conversation, improving a fraudster’s ability to pass even the more stringent verification checks. Hypothetically, a fraudster armed with a stolen identity could use generative AI to make a fraudulent insurance claim and then argue the case.
Exploitation of major disasters. Fraudsters have always tried to take advantage of natural disasters, such as the recent earthquakes in Turkey and Syria, but these attacks have become more sophisticated with the use of AI image generation. Fraudsters are using generative AI to create fake images of real-world situations as backdrops for fake charitable pages. The recent earthquake quickly attracted the attention of fraudsters, who began setting up fake donation pages on TikTok and other channels. Many of these pages used AI to generate fake images and produce convincing messaging to coax money out of victims.
Generative AI is a very powerful tool for fraudsters, and it’s clear that all fraud investigations, credit applications, and insurance claims will now require a much higher level of scrutiny. Many firms, including OpenAI, are launching tools to detect AI-generated content, which could be instrumental in detecting generative AI’s use in fraud.
Rapidly educating businesses and individuals about the nefarious uses of generative AI, and warning them about the threats, is now essential for protecting the payments industry against this powerful new technology.