
Unmasking AI: how artificial intelligence could be used for – and against – fraud

14th September 2023
Sheryl Miles

Advancements powered by AI are coming thick and fast. Not only are tangible results being unveiled regularly, but artificial intelligence has gone mainstream, frequently dominating the news cycle and being utilised by more and more people in everyday life.

It’s undoubtedly an exciting time for those working in healthcare, data science, and the countless other areas AI is bolstering.

However, rapid progression brings its own set of challenges. For one, the upcoming EU AI Act is reportedly proving tricky to draft, with the landscape of what’s possible changing so frequently. There’s also the burning issue of misuse. For all its countless benefits, new tech is often exploited for nefarious means, usually due to the lack of regulation at its inception.

Take the early days of the internet, for example. Before the World Wide Web debuted, when the internet was still primarily used by academic researchers, the first major cyber attack was carried out by Cornell graduate student Robert Morris in 1988. Labelled the ‘Morris Worm’, it heralded the first felony conviction in the US under the 1986 Computer Fraud and Abuse Act and triggered a technological arms race between hackers and developers for years to come.

AI stands at a similar precipice today. With the growing use of deepfake technology and increasingly sophisticated scams, how do those at the forefront of the ethical AI movement minimise fraudulent activity and, ultimately, harness the technology for good?

Increasing sophistication

AI scams are exploding – and so is the number of people falling prey to them. With tools that can replicate video or clone a voice to resemble a loved one’s, criminals are taking advantage. In fact, one in four of those surveyed had either fallen victim to an AI voice scam or knew someone who had, according to a study by McAfee. Most significantly, the survey also found that 70% of respondents weren’t confident they could tell a fake AI-generated voice from a real one.

As AI evolves, scams are taking many forms. Perhaps the most high-profile case to date involved financial journalist and broadcaster Martin Lewis in 2018. Thanks to his MoneySavingExpert website, he is a trusted source of financial advice, which also makes him a prime target for cybercriminals. His likeness was replicated as part of a deepfake video on Facebook, in which he appeared to urge consumers to part with their money. Lewis took, and later dropped, legal action against the social media giant for defamation; the company admitted that thousands of similar scam adverts had sat on the site unchecked.

With watertight regulation not yet in place, who should bear responsibility? Legally, it’s not immediately clear. It’s a reminder that lawmakers must move quickly to remove the confusion that criminals can capitalise on.

Unexpected consequences

Even though you’d hope AI would only ever be put to positive ends, its misuse by criminals does have some unexpected benefits for developers. Much like the ‘Morris Worm’, new threats force innovators to act quickly, finding rapid solutions and advancing AI even further in order to snuff out danger. This, in turn, adds a new layer of protection for the general public.

Like it or not, overall expertise in the market also grows as developers and criminals battle for supremacy. New skills are acquired on both sides, with some using their illegally gained knowledge for good, like the infamous Kevin Mitnick, who became a white-hat hacker in later life.

Technological cycles come and go, but artificial intelligence is different because of the unprecedented level of open-source software and sharing of techniques on offer. This, of course, can be dangerous. However, it also encourages a collaborative approach, one that is key to tackling issues such as fraud.

Fraud prevention

Ironically, despite its misuse, AI is also becoming more and more effective at preventing fraud. This is particularly true in banking. In a 2022 report, Juniper Research predicted that global business spend on AI-enabled financial fraud detection and prevention strategy platforms will exceed $10 billion in 2027.

Banking institutions are already taking steps to reflect those projections. Earlier this year, Mastercard introduced Consumer Fraud Risk, an AI-powered preventive solution that has since been adopted by nine UK banks. The system allows financial institutions to trace and stamp out fraudulent activity far earlier than before by analysing specific factors, including account names, payment values, and payer and payee history, triggering detection far more quickly than manual review.
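
To make that idea concrete, the sketch below shows, in Python, how a bank might combine such signals into a single payment risk score. It is purely illustrative rather than Mastercard’s actual scoring logic; every feature, weight, and threshold here is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PayeeProfile:
    """Running payment history for a payee, as seen by the sending bank."""
    payment_count: int = 0
    total_value: float = 0.0

    @property
    def mean_value(self) -> float:
        # Average value of past payments to this payee (0.0 if none yet)
        return self.total_value / self.payment_count if self.payment_count else 0.0

def risk_score(amount: float, name_matches: bool, profile: PayeeProfile) -> float:
    """Combine simple signals into a 0-1 risk score. Weights are hypothetical."""
    score = 0.0
    if profile.payment_count == 0:
        score += 0.4   # first payment to a brand-new payee, common in scams
    elif amount > 3 * profile.mean_value:
        score += 0.3   # payment far above this payee's historical average
    if not name_matches:
        score += 0.4   # account name doesn't match the intended payee
    if amount > 1000:
        score += 0.2   # high absolute value increases exposure
    return min(score, 1.0)

# A first-time payee, a mismatched account name, and a large transfer
# together push the score to 1.0, so the bank might hold the payment.
print(risk_score(2500.0, name_matches=False, profile=PayeeProfile()))
```

A production system would learn such weights from labelled transaction data rather than hand-tuning them, but the principle is the same: score each payment against the payer’s and payee’s history before the money moves.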

With TSB adopting the technology at an early stage, it’s estimated that almost £100 million in scam payments would be saved across the UK if all banks performed similarly. Evidently, AI holds the key to stopping itself from being exploited by fraudsters.

The bottom line

Clearly, there is still so much ambiguity surrounding artificial intelligence, and those grey areas are being seized upon by criminals. Ideally, the upcoming EU AI Act will address many of these challenges – but it’s a big ask. 

In the meantime, it is down to those in the ethical AI movement to find innovative solutions, prioritising collaboration and openness in their goals. Only when developers are armed with preventive measures, and the general public with greater education, will the number of victims falling prey to fraudsters begin to fall.
