Spotting fake IDs in online AML checks
You run your usual AML searches. Sanctions clear. PEPs clear. ID documents match. Job done? Not quite.
AML searches and online identity checks are now standard procedure. Yet modern AI is shifting the landscape. Gone are the days of simple document forgery. Criminals now use deepfake technology to generate realistic video calls, voice messages and fake IDs that can—and do—fool online ID checks.
For small property, legal or accountancy businesses, understanding how deepfake fraud fits into your daily operations is critical.
AI deepfakes are already working
In early 2024, Arup (the global engineering firm behind icons like the Sydney Opera House) was conned into transferring around US$25 million (roughly £20 million) after a finance employee in its Hong Kong office received a video call from what looked and sounded like the firm’s CFO. Fifteen transfers were made before it became apparent that none of the other participants was genuine—they were all deepfakes generated from livestream and meeting footage.
Another well-publicised case dates from 2019, when a cloned voice of a German parent company’s CEO persuaded the head of its UK energy subsidiary to send €220,000 to a Hungarian supplier—money that was quickly moved on via Mexico.
Voice-cloning tech has been used to mimic the bosses of WPP, Octopus Energy and others via WhatsApp and video, prompting near-miss transfers of large sums. Deepfake technology is good. And it’s getting better.
Can this really happen to your business?
If you’re a small firm, it’s easy to assume deepfakes are someone else’s problem. Big money, big names, big targets. But the reality is that most businesses use similar tools and the same processes, and that makes everyone vulnerable.
Let’s take a hypothetical example. A client provides a passport and a utility bill. Your system runs a document-based online identity check, perhaps a biometric match, and an address or credit reference check. All green. You jump on a video call to finalise things. The person looks and sounds exactly like the ID.
But what you’re seeing might be an AI-generated replica. Something designed to pass every layer of your check, except one: human instinct.
This is where small firms have the edge. You know your clients. You ask smart questions. You’re not stuck in red tape. The trick is using that advantage on purpose—not just by accident.
How to strengthen your defences
1. Add a live, unpredictable step
Standard document and biometric checks are essential, but not enough on their own. You need something dynamic. Ask the person on a live call to say a randomly generated phrase, hold up an everyday object (like a pen or a mug), turn their head or change the lighting. Real-time deepfakes still struggle to render sudden movement, occlusion and lighting changes convincingly, so these small tests can throw them off.
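The key property of the random-phrase test is that the phrase cannot be predicted or pre-recorded. As a minimal sketch (the word lists here are purely illustrative), a phrase can be generated with Python's `secrets` module, which draws from a cryptographically strong random source:

```python
import secrets

# Illustrative word lists -- in practice you'd use much longer ones.
ADJECTIVES = ["purple", "quiet", "frozen", "lucky", "amber"]
NOUNS = ["kettle", "bridge", "lantern", "pebble", "violin"]
VERBS = ["jumps", "whistles", "floats", "spins", "hums"]

def random_challenge_phrase() -> str:
    """Build an unpredictable phrase for the caller to repeat live.

    `secrets.choice` is used (rather than `random.choice`) so an
    attacker who has seen earlier phrases cannot guess the next one.
    """
    return " ".join(
        secrets.choice(words) for words in (ADJECTIVES, NOUNS, VERBS)
    )

print(random_challenge_phrase())  # e.g. "amber lantern whistles"
```

Generate the phrase only once the call has started, read it out, and ask the person to repeat it back on camera—a pre-rendered deepfake cannot improvise it.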
2. Layer your verification
Don’t rely on one method. Combine a document check with biometric verification, and follow up with a credit or address validation from a separate source. A fraudster who fakes one layer often can’t make it line up with the others.
Fraudsters are known to use a tactic called ‘repeaters’. This is where they test fake documents before they ever reach you, automatically uploading lots of slightly different versions to identity platforms to see what gets through. One version might get rejected, another accepted. Over time, they learn exactly how to design a fake that passes.
This is why layering your checks is so important. If one system is fooled, the others can still catch it.
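The layering policy itself is simple to express: no single automated layer may approve a client on its own, and any disagreement between layers escalates to a person. A minimal sketch of that decision rule (the check names and fields are hypothetical, not from any specific platform):

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str        # e.g. "document", "biometric", "address/credit"
    passed: bool
    detail: str = ""

def layered_decision(results: list[CheckResult]) -> str:
    """Combine independent checks: any failure routes to manual review.

    The point is the policy, not the code -- a client proceeds only
    when every independent layer agrees.
    """
    failures = [r.name for r in results if not r.passed]
    if not failures:
        return "proceed"
    return f"manual review: {', '.join(failures)}"

checks = [
    CheckResult("document", True),
    CheckResult("biometric", True),
    CheckResult("address/credit", False, "no trace at given address"),
]
print(layered_decision(checks))  # -> manual review: address/credit
```

The design choice worth noting: a failed layer never silently downgrades the result—it names itself, so the human reviewer knows exactly which source disagreed.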
3. Watch their behaviour
Warning signs of money laundering and fraud often come up in the day-to-day business relationship you have with your client. Be alert to signs like:
- Sudden urgency and unreasonable demands
- Unusual channels, like a client switching to WhatsApp or SMS unexpectedly
- Requests to change banking details at the last minute
- Logins or calls from unexpected locations
- IP addresses that don’t match the claimed location—if the person says they’re based in London but the IP shows up in another country, take that seriously
When something feels off, it usually is. Trust that instinct.
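If your case-management system records basic interaction data, the warning signs above can be collected automatically as soft flags for a human to review. A rough sketch, with every field name illustrative rather than taken from a real product:

```python
def behaviour_flags(event: dict) -> list[str]:
    """Collect soft warning signs from one client interaction.

    Flags never block on their own -- they prompt a human to look.
    All keys here are hypothetical; map them onto whatever your
    case-management system actually records.
    """
    flags = []
    if event.get("urgent_pressure"):
        flags.append("sudden urgency")
    if event.get("channel") in {"whatsapp", "sms"} and event.get("usual_channel") == "email":
        flags.append("unexpected channel switch")
    if event.get("bank_details_changed_late"):
        flags.append("last-minute bank detail change")
    claimed, observed = event.get("claimed_country"), event.get("ip_country")
    if claimed and observed and claimed != observed:
        flags.append(f"IP country {observed} != claimed {claimed}")
    return flags

event = {
    "urgent_pressure": True,
    "channel": "whatsapp",
    "usual_channel": "email",
    "claimed_country": "GB",
    "ip_country": "RU",
}
print(behaviour_flags(event))
```

Each flag maps directly onto one bullet above; the value is in surfacing them together, because a single oddity is often innocent while a cluster rarely is.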
4. Train for it
Your employees are your first line of defence. Walk them through realistic deepfake scenarios. What would they do if they got a video call from a client who looks familiar, but something seems just a bit off? The more your people practise double-checking and asking clients to perform live, dynamic tests, the more normal it becomes.
Deepfakes aren’t rare, futuristic or irrelevant
They’re here, they’re convincing and they’re slipping through standard procedures. But they’re not unstoppable. By layering your AML searches, mixing live interaction with data-based checks, and empowering your team to challenge what doesn’t sit right, you put the odds back in your favour.
This is where small firms win. You’re close to your clients. You’re nimble. And with the right mix of tools and curiosity, you’re harder to fool than any system alone.