The rising threat of AI in e-commerce fraud – and how to combat it

Many innovations designed to make life easier for your customers are currently being weaponised by fraudsters.

Fraudsters are no longer just using stolen card numbers; they now deploy sophisticated methods powered by artificial intelligence (AI) to bypass traditional security hurdles. And the more the technology evolves, the harder these attacks become to detect.

From generative AI creating 'Frankenstein' synthetic identities to deepfakes that bypass biometric security, the tools of 2026 are double-edged swords. For the UK e-commerce sector, this has created a 'capability chasm' where smaller merchants are being left exposed. It is time to stop viewing fraud as a technical glitch and start seeing it for what it is: a systemic threat to the digital ecosystem.

According to data from the Crime Survey for England and Wales (CSEW), fraud incidents saw a 31% year-on-year increase last year. Within this surge, bank and credit account fraud rose by 30%, while consumer and retail fraud jumped by 23%.

With over 4 million incidents reported annually, the conversation around fraud is moving away from 'how do we fix the system?' and toward 'how do we protect people?'

The new face of fraud: When AI gets personal

Yesterday’s scammers relied on generic phishing emails and basic 'carding' techniques. Today, any new tool developed for convenience is almost immediately weaponised by criminals.

1. Synthetic identity fraud and 'Frankenstein' accounts

Criminals are using generative AI to weave together synthetic identities: part real, part fabricated. These personas build credible histories over months, making them nearly impossible to distinguish from genuine customers during onboarding. AI-generated deepfakes can even bypass biometric checks, providing 'live' faces that legitimise stolen or fake IDs.

2. Social engineering at scale

The modern fraudster doesn't just hack a system; they hack human psychology.

  • Deepfake support attacks: Real-time audio and video deepfakes allow scammers to impersonate customers on video calls to claim 'missing' high-value items or demand refunds.
  • Hyper-personalised phishing: By scraping social media and public data, AI creates 'spear-phishing' messages that reference actual recent order numbers, tricking customers into 'verifying' details on spoofed pages.

3. The industrialisation of fraud

The image of a lone hacker is outdated. Today, Organised Crime Groups (OCGs) use cloud-based infrastructure to run high-volume, high-speed scams. Whether it’s 'romance scams' over WhatsApp or 'investment scams' via Instagram ads, these operations are professionalised, borderless, and relentlessly efficient.

The £20m video call.
In a landmark 2024 case, an employee at Arup was tricked into transferring £20 million after attending a video call filled with deepfake avatars of the company’s CFO and other colleagues. It’s a stark reminder: in 2026, seeing is no longer believing.

(Source: Dan Milmo, Global Technology Editor, the Guardian)

Why rules-based systems are failing

An uncomfortable truth in the current landscape is the widening gap between large and small organisations. For years, e-commerce platforms relied on static rules, for example: 'If the IP is from a different country and the order is over £500, flag it.' Modern AI-assisted fraudsters easily mimic 'normal' behaviour to stay under these thresholds (see the sketch after the list below). This creates a competitive paradox:

  • Large players: Have the budgets for dedicated fraud, compliance, and IT teams to mitigate risk.
  • Mid-sized businesses, SMEs and startups: Often have one person handling AML, compliance, and risk simultaneously, making them the 'weak links' that OCGs target most aggressively.
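
To make the weakness concrete, below is a minimal sketch of such a static rule in Python. The field names and the £500 threshold are illustrative assumptions rather than any real fraud engine's API; the point is how easily a fraudster who probes the rule can stay underneath it.

```python
# A minimal sketch of a static, rules-based check. Field names and
# thresholds are illustrative assumptions, not a real fraud engine's API.

def flag_order_static(order: dict) -> bool:
    """Classic rule: foreign IP plus an order value over £500 gets flagged."""
    return (order["ip_country"] != order["billing_country"]
            and order["amount_gbp"] > 500)

# An AI-assisted fraudster simply probes the rule and stays under it:
probe = {"ip_country": "GB", "billing_country": "GB", "amount_gbp": 480.00}
print(flag_order_static(probe))  # False - the order sails straight through
```
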
The ‘mouldy fruit’ scam.
In 2025, e-commerce platforms saw a surge in ‘refund-only’ fraud. Buyers used AI tools to doctor photos of perfectly good products, adding realistic ‘mould’ to fruit or ‘cracks’ to electronics, to claim instant refunds without ever returning the item.

(Source: Fran Lu, Culture Reporter, South China Morning Post)

AI vs traditional rule-based fraud detection

Feature | Traditional Rule-Based | AI-Driven (2026 Standard)
--- | --- | ---
Speed | Batch processing or laggy checks | Real-time (under 300 ms)
Identity | Static (Name, DOB, Address) | Behavioural (typing cadence, swipe pressure)
Adaptability | Manual updates required | Self-learning (detects new patterns)
Focus | Technical exploitation | Human manipulation
M&S and the ‘inventory denial’ bots.
Retail giants like M&S have had to bolster defences against ‘inventory denial’ bots. These AI-driven scripts flood sites and add thousands of items to baskets without checking out. This makes products appear ‘Out of Stock’ to genuine customers, damaging brand trust and driving shoppers to competitors.

(Source: Mark Tarre, News Chief, eCommerceNews UK)

How to combat AI fraud: A multi-layered defence

To protect your margins and customer trust, a proactive, ecosystem-wide approach is now essential.

Example of an AI-modified image of a laptop ‘damaged in transit’ being used to commit refund fraud.

1. Prioritise transaction monitoring

For smaller businesses with limited budgets, transaction monitoring is the first line of defence. While basic onboarding screening is useful, it rarely catches a sophisticated synthetic ID. Robust monitoring identifies red flags after onboarding, such as a 'good' account suddenly making multiple small, rapid-fire transactions or racking up frequent chargebacks.
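
As a starting point, a basic velocity check takes only a few lines. The sketch below is a minimal illustration, assuming a ten-minute sliding window and a five-transaction threshold; real systems tune these values per merchant and combine many signals (amounts, devices, chargeback history).

```python
# A minimal sliding-window velocity check. The window and threshold are
# illustrative assumptions; production systems tune these per merchant.
from collections import deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
MAX_TXNS_IN_WINDOW = 5

class VelocityMonitor:
    def __init__(self) -> None:
        self.history: dict[str, deque] = {}

    def is_suspicious(self, account_id: str, ts: datetime) -> bool:
        """Return True when an account breaches the rapid-fire threshold."""
        q = self.history.setdefault(account_id, deque())
        q.append(ts)
        # Drop timestamps that have fallen outside the sliding window.
        while q and ts - q[0] > WINDOW:
            q.popleft()
        return len(q) > MAX_TXNS_IN_WINDOW

monitor = VelocityMonitor()
start = datetime.now()
flags = [monitor.is_suspicious("acc_42", start + timedelta(seconds=i))
         for i in range(7)]
print(flags)  # the sixth and seventh rapid-fire transactions get flagged
```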

2. Implement behavioural biometrics

AI can steal a password, but it struggles to replicate the subconscious way a human interacts with a device. Analysis of keystroke dynamics (typing rhythm) and navigation patterns can flag an AI bot even if it has the correct login credentials.
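
As a simplified illustration of the idea, the sketch below compares a session's typing rhythm against a stored profile. Real behavioural-biometric products model far richer signals (dwell time, flight time, swipe pressure), and the tolerance values here are illustrative assumptions.

```python
# A toy keystroke-dynamics check: compare inter-key intervals (in ms)
# against an enrolled profile. Tolerances are illustrative assumptions.
import statistics

def looks_like_owner(intervals_ms: list[float],
                     profile_mean: float,
                     profile_stdev: float,
                     tolerance: float = 3.0) -> bool:
    """Flag sessions whose typing rhythm deviates from the enrolled profile."""
    mean = statistics.mean(intervals_ms)
    stdev = statistics.stdev(intervals_ms)
    # Bots type with near-zero variance at machine speed; humans are noisy.
    rhythm_matches = abs(mean - profile_mean) < tolerance * profile_stdev
    has_human_noise = stdev > 0.1 * profile_stdev
    return rhythm_matches and has_human_noise

# A scripted bot pasting credentials at machine speed fails both tests:
bot_session = [8.0, 8.1, 7.9, 8.0, 8.05]
print(looks_like_owner(bot_session, profile_mean=145.0, profile_stdev=40.0))  # False
```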

3. Move from individual to ecosystem protection

Criminals collaborate; merchants must do the same. This means:

  • Intelligence sharing: Moving past the 'silo' mentality to share data on known fraudulent networks.
  • Hybrid AI models: It takes AI to beat AI, but human oversight remains critical. The EU AI Act and similar UK guidance highlight that AI cannot be left to run without human checks to catch novel, 'out-of-the-box' phishing tactics.

4. Move from 'one-and-done' to continuous verification

In the past, customers were verified at sign-up and then left alone. But as synthetic identities 'mature' over months, the standard in 2026 is moving toward continuous Electronic Identity Verification (eIDV), as sketched after the list below.

  • What it is: Instead of a single snapshot at onboarding, your system periodically 'checks in.' It looks for anomalous changes in a customer’s data profile, like a sudden change in email domain to a disposable one or a phone number switching to a Voice over IP (VoIP) service.
  • Why it works: It catches 'sleeper' accounts that were created legally but have since been taken over or 'ripened' for a large-scale fraud attempt.
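
A minimal sketch of such a periodic check-in appears below. The disposable-domain list and the pre-computed VoIP flag are illustrative assumptions; in practice, both signals come from commercial data providers.

```python
# A minimal eIDV 'check-in': diff the stored profile against the latest
# snapshot. The domain list and VoIP flag are illustrative assumptions.
DISPOSABLE_DOMAINS = {"mailinator.com", "10minutemail.com", "guerrillamail.com"}

def profile_drift_flags(previous: dict, current: dict) -> list[str]:
    """Return human-readable flags for anomalous profile changes."""
    flags = []
    old_domain = previous["email"].split("@")[-1].lower()
    new_domain = current["email"].split("@")[-1].lower()
    if new_domain != old_domain and new_domain in DISPOSABLE_DOMAINS:
        flags.append("email moved to a disposable domain")
    if current.get("phone_is_voip") and not previous.get("phone_is_voip"):
        flags.append("phone number switched to VoIP")
    return flags

before = {"email": "jo@btinternet.com", "phone_is_voip": False}
after = {"email": "jo@mailinator.com", "phone_is_voip": True}
print(profile_drift_flags(before, after))
# ['email moved to a disposable domain', 'phone number switched to VoIP']
```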

5. Implement machine vision for returns

AI-generated 'damaged goods' photos are a rising headache. Fraudsters now use generative AI to create realistic images of broken electronics or 'empty boxes' to claim refunds without returning anything. A sketch of the metadata check follows the list below.

  • The fix: Use machine vision tools that scan return photos for 'AI artefacts' (subtle pixel-level inconsistencies invisible to the human eye).
  • Added layer: Require photo metadata (EXIF data) indicating the photo was taken at the customer's GPS location at a plausible time, rather than being a generated file.
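
The artefact scan itself requires a specialised detection model, but the metadata layer is straightforward. Below is a minimal sketch using Pillow (9.4 or later for the ExifTags.IFD enum), treating a missing capture timestamp and GPS block as a cheap first filter; since EXIF can be forged, this is one signal among several, never proof on its own.

```python
# A minimal EXIF plausibility check. Generated images usually carry no
# camera EXIF at all, so missing DateTimeOriginal and GPS tags is a
# cheap first filter - one signal among several, never proof on its own.
from PIL import Image
from PIL.ExifTags import IFD

DATETIME_ORIGINAL = 0x9003  # tag id inside the camera's Exif sub-IFD

def exif_looks_plausible(path: str) -> bool:
    """Require a capture timestamp and a GPS block in the return photo."""
    exif = Image.open(path).getexif()
    gps_block = exif.get_ifd(IFD.GPSInfo)    # empty dict if no GPS data
    camera_block = exif.get_ifd(IFD.Exif)    # camera sub-IFD
    return bool(gps_block) and DATETIME_ORIGINAL in camera_block
```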

6. Adopt a 'Zero-Trust' voice policy

Since deepfake voice technology is now so accessible, a phone call from a 'manager' or a 'VIP customer' can no longer be taken at face value. A sketch of the verification step follows the list below.

  • The strategy: Establish an internal rule that no sensitive action—like an emergency password reset or a high-value refund override—can be triggered by voice alone.
  • The 'Out-of-Band' check: If a customer calls with a high-stakes request, the agent must send a verification code to the customer’s registered app or secondary email before proceeding.
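
Here is a minimal sketch of that rule in code. The send_push_code helper is a hypothetical stand-in for a real push or SMS provider, and the six-digit code format is an illustrative assumption.

```python
# A minimal out-of-band verification sketch. send_push_code is a
# hypothetical stub standing in for a real push/SMS provider.
import hmac
import secrets

def send_push_code(device_token: str, code: str) -> None:
    """Stub: in production, this calls your notification provider."""
    print(f"[push -> {device_token}] verification code: {code}")

def start_out_of_band_check(device_token: str) -> str:
    """Issue a one-time code over a second, non-voice channel."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    send_push_code(device_token, code)
    return code

def approve_sensitive_action(expected: str, supplied: str) -> bool:
    """No refund override or password reset proceeds on voice alone."""
    return hmac.compare_digest(expected, supplied)  # constant-time compare
```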

7. Use graph networks to spot the gangs

Fraud is rarely an isolated incident; it’s usually a network. Graph Neural Networks (GNNs), such as the one powering Ecommpay’s award-winning solution, help you see the invisible links between seemingly unrelated accounts. A simplified sketch of the graph intuition follows the list below.

  • The power of connection: A GNN can flag that three different accounts, with different names and addresses, all happen to use the same device fingerprint or have logged in from the same obscure IP range within the same hour.
  • Ecosystem protection: This is where the 'Me to We' shift happens. By seeing these clusters, you can block an entire network of bots before they even reach the checkout.
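
A production GNN learns which connection patterns matter; the sketch below shows only the underlying graph intuition, using networkx to cluster accounts that share a device fingerprint or IP address. The data is invented for the example.

```python
# The graph intuition behind GNN-based detection: accounts that share a
# device fingerprint or IP address collapse into one connected cluster.
from collections import defaultdict
from itertools import combinations
import networkx as nx

accounts = [
    {"id": "acc_1", "device": "fp_9a3", "ip": "185.220.101.4"},
    {"id": "acc_2", "device": "fp_9a3", "ip": "92.40.12.7"},
    {"id": "acc_3", "device": "fp_77c", "ip": "185.220.101.4"},
    {"id": "acc_4", "device": "fp_000", "ip": "81.2.69.160"},
]

g = nx.Graph()
g.add_nodes_from(a["id"] for a in accounts)
shared = defaultdict(list)
for a in accounts:
    shared[("device", a["device"])].append(a["id"])
    shared[("ip", a["ip"])].append(a["id"])
for ids in shared.values():
    g.add_edges_from(combinations(ids, 2))  # link accounts sharing an attribute

# Any component with more than one account is a candidate fraud ring.
rings = [c for c in nx.connected_components(g) if len(c) > 1]
print(rings)  # [{'acc_1', 'acc_2', 'acc_3'}] - acc_4 stands alone
```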

The threat is no longer a technical one; it is a systemic challenge at the intersection of psychology, technology, and regulation. By moving from reactive reimbursement to proactive, intelligence-led prevention, you can protect your business from the industrialised scale of modern fraud.
