The rising threat of AI in e-commerce fraud – and how to combat it
Many innovations designed to make life easier for your customers are currently being weaponised by fraudsters.
Fraudsters are no longer just using stolen card numbers; they are now deploying sophisticated methods that use artificial intelligence (AI) to slip past traditional security hurdles. And the more the technology evolves, the more difficult it becomes to detect.
From generative AI creating 'Frankenstein' synthetic identities to deepfakes that bypass biometric security, the tools of 2026 are double-edged swords. For the UK e-commerce sector, this has created a 'capability chasm' where smaller merchants are being left exposed. It is time to stop viewing fraud as a technical glitch and start seeing it for what it is: a systemic threat to the digital ecosystem.
According to data from the Crime Survey for England and Wales (CSEW), fraud incidents saw a 31% year-on-year increase last year. Within this surge, bank and credit account fraud rose by 30%, while consumer and retail fraud jumped by 23%.
With over 4 million incidents reported annually, the conversation around fraud is moving away from 'how do we fix the system?' and toward 'how do we protect people?'
The new face of fraud: When AI gets personal
Yesterday’s scammers relied on generic phishing emails and basic 'carding' techniques. Today, any new tool developed for convenience is almost immediately weaponised by criminals.
1. Synthetic identity fraud and 'Frankenstein' accounts
Criminals are using generative AI to weave together synthetic identities: part real, part fabricated. These personas build credible histories over months, making them nearly impossible to distinguish from genuine customers during onboarding. AI and deepfakes can even bypass biometric checks, providing 'live' faces that legitimise stolen or fake IDs.
2. Social engineering at scale
The modern fraudster doesn't just hack a system; they hack human psychology. Generative AI lets criminals craft hyper-personalised phishing messages, clone voices, and stage convincing video calls at scale, turning what was once a crude numbers game into targeted manipulation.
3. The industrialisation of fraud
The image of a lone hacker is outdated. Today, Organised Crime Groups (OCGs) use cloud-based infrastructure to run high-volume, high-speed scams. Whether it’s 'romance scams' over WhatsApp or 'investment scams' via Instagram ads, these operations are professionalised, borderless, and relentlessly efficient.
In a landmark 2024 case, an employee at Arup was tricked into transferring £20 million after attending a video call filled with deepfake avatars of the company’s CFO and other colleagues. It’s a stark reminder: in 2026, seeing is no longer believing.
Why rules-based systems are failing
A significant 'uncomfortable truth' in the current landscape is the widening gap between large and small organisations. For years, e-commerce platforms relied on static rules; for example, 'If the IP is from a different country and the order is over £500, flag it.' Modern AI fraudsters easily mimic 'normal' behaviour to stay under these thresholds. This creates a competitive paradox: the largest platforms can afford adaptive, AI-driven defences, while smaller merchants are left running the very rules fraudsters have already learned to game.
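To see why such a rule is so easy to game, here is a minimal sketch of it; the field names (`ip_country`, `billing_country`, `amount_gbp`) are illustrative, not any particular platform's schema:

```python
# A minimal sketch of the static, rules-based check described above.
# Field names are illustrative placeholders.

def flag_transaction(tx: dict) -> bool:
    """Flag if the IP country differs from billing and the order exceeds £500."""
    return tx["ip_country"] != tx["billing_country"] and tx["amount_gbp"] > 500

# A fraudster using a local proxy and keeping every order at £499
# sails straight through:
print(flag_transaction({"ip_country": "GB", "billing_country": "GB",
                        "amount_gbp": 499}))  # False
```

The rule never adapts: once a criminal learns the £500 threshold and masks their IP, every fraudulent order looks 'normal' by definition.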
In 2025, e-commerce platforms saw a surge in ‘refund-only’ fraud. Buyers used AI tools to doctor photos of perfectly good products, adding realistic ‘mould’ to fruit or ‘cracks’ to electronics, to claim instant refunds without ever returning the item.
AI vs traditional rule-based fraud detection
Retail giants like M&S have had to bolster defences against ‘inventory denial’ bots. These AI-driven scripts flood sites and add thousands of items to baskets without checking out. This makes products appear ‘Out of Stock’ to genuine customers, damaging brand trust and driving shoppers to competitors.
How to combat AI fraud: A multi-layered defence
To protect your margins and customer trust, a proactive, ecosystem-wide approach is now essential.
1. Prioritise transaction monitoring
For smaller businesses with limited budgets, transaction monitoring is the first line of defence. While basic onboarding screening is useful, it rarely catches a sophisticated synthetic ID. Robust monitoring identifies red flags after the fact, such as a 'good' account suddenly making multiple small, rapid-fire transactions or frequent chargebacks.
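A minimal sketch of that kind of after-the-fact velocity check follows; the thresholds (five transactions in ten minutes) are placeholder values for illustration, not tuned recommendations:

```python
from collections import deque
from datetime import datetime, timedelta

# Illustrative sketch: flag an account whose recent transaction velocity
# spikes. Thresholds are placeholders, not recommendations.

class VelocityMonitor:
    def __init__(self, max_tx: int = 5, window: timedelta = timedelta(minutes=10)):
        self.max_tx = max_tx
        self.window = window
        self.history: dict[str, deque] = {}

    def record(self, account_id: str, ts: datetime) -> bool:
        """Record a transaction; return True if the account should be flagged."""
        q = self.history.setdefault(account_id, deque())
        q.append(ts)
        # Evict transactions that have aged out of the sliding window.
        while q and ts - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_tx
```

A 'good' account that suddenly fires six payments in a minute trips the flag, exactly the pattern a matured synthetic identity produces when it is finally cashed out.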
2. Implement behavioural biometrics
AI can steal a password, but it struggles to replicate the subconscious way a human interacts with a device. Analysis of keystroke dynamics (typing rhythm) and navigation patterns can flag an AI bot even if it has the correct login credentials.
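A toy illustration of the keystroke-dynamics idea, assuming an invented variance threshold; real behavioural-biometrics engines model far richer signals than raw timing, but the core intuition is that scripted input is unnaturally regular:

```python
from statistics import stdev

# Illustrative sketch with an invented threshold: bots often 'type' with
# near-constant rhythm, while human keystroke timing is naturally noisy.

def keystroke_intervals(timestamps_ms: list[float]) -> list[float]:
    """Gaps between consecutive keystrokes, in milliseconds."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def looks_automated(timestamps_ms: list[float]) -> bool:
    intervals = keystroke_intervals(timestamps_ms)
    if len(intervals) < 2:
        return False  # too little data to judge
    return stdev(intervals) < 5.0  # near-zero jitter suggests scripted input

# A script firing a key exactly every 50 ms:
bot = [i * 50.0 for i in range(20)]
# A human with natural jitter:
human = [0, 110, 190, 310, 420, 560, 610, 750]
print(looks_automated(bot))    # True
print(looks_automated(human))  # False
```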
3. Move from individual to ecosystem protection
Criminals collaborate; merchants must do the same. This means sharing fraud signals across the ecosystem: contributing to and drawing on consortium data from payment providers and banks, exchanging known-bad device fingerprints and mule-account indicators, and reporting confirmed fraud to industry bodies so that one merchant's loss becomes every merchant's early warning.
4. Move from 'one-and-done' to continuous verification
In the past, customers were verified at sign-up and then left alone. But as synthetic identities 'mature' over months, the standard in 2026 is moving toward continuous Electronic Identity Verification (eIDV).
5. Implement machine vision for returns
AI-generated 'damaged goods' photos are a rising headache. Fraudsters now use generative AI to create realistic images of broken electronics or 'empty boxes' to claim refunds without returning anything. Machine-vision checks can push back: error-level analysis and metadata inspection can expose doctored regions, and comparing a claim photo against the product images on file can reveal that the 'damaged' item was never yours at all.
6. Adopt a 'Zero-Trust' voice policy
Since deepfake voice technology is now so accessible, a phone call from a 'manager' or a 'VIP customer' can no longer be taken at face value. Verify requests out-of-band: call back on a number you already hold on file, or confirm through a separate, pre-agreed channel before moving money or changing account details.
7. Use graph networks to spot the gangs
Fraud is rarely an isolated incident; it’s usually a network. Graph Neural Networks (GNNs), like Ecommpay’s award-winning solution, help you see the invisible links between seemingly unrelated accounts.
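This is not Ecommpay's implementation; the sketch below uses a much simpler technique than a trained GNN, a union-find over shared attributes (device fingerprint, card, address), but it illustrates the same goal of surfacing rings of 'unrelated' accounts acting together:

```python
from collections import defaultdict

# Illustrative sketch: link accounts that share any attribute value
# (device fingerprint, card hash, shipping address) and group them
# with union-find. A production GNN learns far subtler connections.

def fraud_rings(accounts: dict[str, dict[str, str]]) -> list[set[str]]:
    parent = {a: a for a in accounts}

    def find(x: str) -> str:
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a: str, b: str) -> None:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # Accounts sharing any (attribute, value) pair get linked.
    seen = defaultdict(list)
    for acct, attrs in accounts.items():
        for key, value in attrs.items():
            seen[(key, value)].append(acct)
    for members in seen.values():
        for other in members[1:]:
            union(members[0], other)

    groups = defaultdict(set)
    for acct in accounts:
        groups[find(acct)].add(acct)
    return [g for g in groups.values() if len(g) > 1]
```

Two accounts that never touch each other directly still end up in the same ring if each shares a device or card with a third, which is exactly the transitive linking that exposes organised fraud operations.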
The threat is no longer a technical one; it is a systemic challenge at the intersection of psychology, technology, and regulation. By moving from reactive reimbursement to proactive, intelligence-led prevention, you can protect your business from the industrialised scale of modern fraud.