Artificial Intelligence and Fraud
Will AI be our biggest friend or biggest foe?
FUTURE PROOF – BLOG BY FUTURES PLATFORM
Artificial intelligence gets its fair share of bad press, usually in the form of dire warnings. But AI is a tool, and whether it is used for good or ill depends on who wields it. In this article, we look at AI and fraud, showing that AI can both help us protect against fraud and help those who want to commit it.
According to the Insurance Information Institute, the most recent numbers show that in 2017, 16.7 million people in the US were victims of identity fraud – a record high, surpassing the previous year's record. In total, USD 16.8 billion was stolen – not a small figure.
But it’s not just individuals who are victims of fraud. According to PwC, 49% of global companies say they have been a victim of “fraud and economic crime” in the previous 24 months. Yet only 22% said they were using some sort of pattern-recognition technology and getting value out of it, and just 11% said the same of AI.
Still, the prevalence of fraud, whether it is targeting individuals or companies, is only too obvious.
And now it seems that AI is beginning to be used to aid fraudulent activities. How?
An article at The Next Web provides a few examples. Researchers at University College London developed an algorithm that can replicate a person's handwriting better than any human forger. Then there are fake conversations: chatbots can mimic your speech or writing patterns so closely that the difference between the copy and the original becomes almost impossible to discern. It gets worse when you add voice forgery or fake videos, which we have also discussed in the past. Both could usher in an era where it is hard to believe anything we see or hear.
Then we have DeepLocker, developed by IBM, which uses artificial intelligence to deliver targeted malware. The AI releases its payload only when certain conditions are met – for example, when it identifies the target through geolocation or through face, voice, or speech recognition. This makes the malware extremely difficult to capture, analyse, and neutralise, because it is hard to understand how the neural network makes its decisions.
But if AI can do all of this, can it also help us protect against fraud?
The answer is yes.
In fact, using AI to combat fraud is becoming essential. It can detect fraudulent activity almost immediately – sometimes within seconds – allowing us to stop it almost as it begins. We also know that companies like MasterCard have been using AI systems to detect fraudulent transaction patterns and card fraud for a long time now.
Another example is Eno, an AI-powered virtual assistant offered by Capital One. When the system finds an unusual pattern and flags it as possible fraud, Eno immediately gets in touch with the cardholder. By asking whether the customer recognises a certain transaction or other flagged activity, it can quickly decide whether fraud was indeed committed.
AI can help us fight fraud in several ways. For one, it is much better than we are at recognizing patterns and outliers in vast pools of structured and unstructured data – not only is it more accurate, it is also much faster. It can also help institutions, especially banks, reduce the number of false positives. That matters because flagging legitimate activity as fraud is extremely inconvenient for customers – possibly even a deal breaker – and every false alarm costs time and money. Finally, AI can help us act immediately, by freezing assets, alerting someone, and so on.
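To make the idea of outlier detection concrete, here is a minimal sketch in Python. It flags transactions that sit far from a card's typical spending using the median absolute deviation (MAD), a simple robust statistic. This is only an illustration of the principle – the function name, threshold, and data are hypothetical, and real fraud systems use far richer features and learned models rather than a single amount-based test.

```python
from statistics import median

def flag_outliers(amounts, threshold=3.5):
    """Return indices of transactions whose amount is far from the
    median, measured in MAD units (a robust outlier test that is not
    skewed by the outlier itself, unlike a plain z-score)."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:  # all amounts (nearly) identical: nothing to flag
        return []
    # 0.6745 scales MAD to be comparable to a standard deviation
    return [i for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

# Hypothetical card history: mostly small purchases, one large spike.
history = [12.5, 9.9, 14.2, 11.0, 13.7, 950.0, 10.4, 12.1]
print(flag_outliers(history))  # → [5], the 950.0 transaction
```

A system like this would not block the card outright on a flag; as in the Eno example above, the flagged transaction would typically be confirmed with the customer first, which is exactly how false positives are kept cheap.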
Going forward, we can be sure that AI security systems will continue to improve, and fighting fraud will become increasingly reliant on automated technologies. Nevertheless, we must remember that AI can also be used to commit fraud.
So is AI a fraudster’s best friend or worst enemy? As the opening paragraph said, it depends on whose hands it falls into – though the scales currently seem to be tilting towards the enemy.