Why AI Can Support AML Compliance but Cannot Replace Professional Judgement

Lisa Simms

Director and Founder of AMLCC and AMLCC Consult

Artificial intelligence is becoming part of everyday compliance work. 

For AML-regulated designated non-financial businesses and professions (DNFBPs) – including accountants, lawyers, trust and company service providers (TCSPs), real estate professionals, and high value dealers, including dealers in precious metals and stones – the real question is no longer whether AI will be used, but how it should be used.

Global AML standards apply to these sectors and are built around a risk-based approach, with firms expected to understand their own risks and apply proportionate controls. 

Used properly, AI can be a valuable support tool. Used badly, it can weaken judgement, flatten risk assessments and policies, controls and procedures (PCPs) into generic text, and push firms back towards the kind of tick-box compliance that supervisors increasingly reject. The opportunity is real, but so is the danger.

Where AI genuinely adds value

The most useful role for AI in AML compliance is as a research and productivity assistant. It can help staff, compliance teams and fee earners move through large volumes of open-source intelligence (OSINT) more quickly, identify patterns across public records, and summarise long material into manageable points for review.

That is particularly helpful in areas such as company registry searches, beneficial ownership research, adverse media screening, public filings, sanctions and PEP checks, litigation databases, and multilingual open-source intelligence. 

A tool that helps a professional review Companies House data, compare public information across jurisdictions, or spot inconsistencies in media reporting can save time and widen the scope of initial enquiries. It can also support staff with drafting follow-up questions, organising notes, and preparing summaries for internal review.

There is also real promise in AI-driven monitoring tools. In practice, transaction monitoring is easier to automate in banking and payments, where data is high-volume and structured. DNFBPs often work with lower volumes, different workflows and more bespoke matters, so they should be cautious about assuming that tools built for financial institutions will transfer neatly into their own environment.

The problem starts when AI is asked to ‘do’ the risk assessment

The difficulty begins when firms ask generative AI to complete business-wide risk assessments, client risk assessments or a set of policies, controls and procedures. This is where caution is essential.

Proper risk assessments or PCPs are not simply well-written documents. They are judgement exercises. 

FATF standards expect firms to understand their risks, maintain policies, controls and procedures to manage them effectively, and apply enhanced or simplified measures according to the level of risk identified. That only works when the assessment or PCPs reflect the firm’s actual services, delivery channels, client types, jurisdictions, source of funds and wealth exposure, transaction patterns, and previous experience of what has gone wrong in practice. 

Generative AI can produce polished wording, but polished wording is not the same as insight. Left to itself, the output is usually generic. Even when a professional gives the tool detailed prompts, the result is still only an outline process based on the information supplied. It cannot genuinely know the business in the way the business knows itself. 

AI tools cannot understand the subtleties of a long-standing client relationship, the significance of a last-minute change in instructions, or the difference between a normal transaction and one that simply feels out of place.

Why professional judgement still sits at the centre

The most important AML judgements in DNFBPs are often not mechanical. They are contextual. They come from experience, scepticism and familiarity with the client, the matter and the commercial reality behind a transaction.

An accountant may notice that a client’s explanation for a payment flow no longer matches the underlying business activity. 

A lawyer may sense that a proposed structure is more complex than the stated purpose requires. 

A TCSP may recognise that a beneficial ownership explanation is technically possible but commercially implausible. 

A real estate professional may spot unexplained urgency, an unusual source of funds, or a buyer who seems detached from the economics of the deal. 

A high value dealer may see a cash pattern or purchasing behaviour that does not ring true.

That instinct is not mystical. It is accumulated knowledge of the business and of the client. FATF standards require suspicious activity / transaction reporting by DNFBPs in specified circumstances, and a central part of an effective AML framework is the ability to recognise when something may be wrong and escalate it into a suspicious activity or transaction report where appropriate. AI cannot and should not carry the responsibility for that process. 

The risk of turning AML into a writing exercise

One of the biggest dangers with overusing AI is that it can make AML compliance look complete when it is not. A polished, generic document can create the illusion of rigour while adding very little that is specific to the firm and its clients, testable, or proportionate.

Weak AML documents usually fail in the same way. They contain broad statements about customer due diligence, monitoring and reporting, but provide little evidence that the firm has really thought about its own risk profile. They sound interchangeable. A risk assessment that could be copied from one practice to another with only a few minor edits is not a real risk assessment at all.

That is also why generic AI-written content invites scepticism from supervisors. The issue is not style alone; it is substance. If the document does not reflect ownership, judgement and firm-specific thinking, it undermines the whole purpose of a risk-based AML programme.

Effectiveness now matters more than mere existence

This point is becoming even more important because the global AML conversation has moved well beyond whether a policy exists on paper. FATF mutual evaluations assess both technical compliance and effectiveness, and FATF is explicit that effectiveness is now the main focus. Countries are expected to show that measures are working and delivering the right results; having laws and frameworks on the books is not enough. 

That same logic is increasingly relevant at business level. The question for a DNFBP is no longer only, ‘Do we have a risk assessment?’ It is also, ‘Does it drive better decisions?’ ‘Does it change the level of due diligence?’ ‘Does it help us identify unusual activity?’ ‘Does it lead to timely escalation, internal challenge and better reporting where needed?’

If AI helps a firm research faster, document decisions more clearly, or review open-source intelligence more efficiently, that supports effectiveness. If it produces generic assessments or PCPs that weaken critical thinking, it does the opposite.

The hidden risks firms should not ignore

There is also a practical governance issue. Official guidance on AI use highlights several risks that are highly relevant to AML work, including confidential information being entered into prompts, automation bias, and hallucinations. It also stresses the need to define clearly what AI can and cannot be used for. 

For DNFBPs, that matters. Client information may be commercially sensitive, legally privileged, confidential or protected by data protection law. Staff may also give too much weight to confident-sounding AI output, especially when they are under time pressure. A summary that looks convincing can still be wrong, incomplete or based on poor sources. In AML, that is not a minor flaw. It can distort risk ratings, weaken ongoing monitoring and create a misleading audit trail.

Practical recommendations for DNFBPs

A sensible approach is to treat AI as an assistant, not an assessor.

Use it for research support, draft summaries, comparison of open-source material, translation, drafting of internal questions, and administrative efficiency. Do not use it to replace the firm’s own PCPs, business-wide risk assessment, client risk assessments, or final judgement on suspicion.

Put an internal AI policy in place. Define which tools are approved, what data may and may not be entered, who is allowed to use the tools, how outputs must be checked, and where human sign-off is required. Official guidance is clear that there is no one-size-fits-all model, so the policy should reflect the nature of your services, clients and data. 

Require staff to verify important outputs. Check source material, especially for adverse media and beneficial ownership research. Record the human rationale behind the final decision. Train teams on hallucinations, automation bias and confidentiality risks. Most importantly, make sure the AML-regulated professional remains fully accountable.

Conclusion

AI can make AML compliance faster, broader and in some areas more consistent. For DNFBPs, it is particularly useful in open-source research, information gathering and administrative support. But AML compliance has never been only about collecting information. It is about interpreting it, challenging it, and knowing when something does not feel right.

That is why AI can support AML compliance but cannot replace professional judgement. The strongest AML programmes will be the ones that use AI to sharpen human thinking, not to bypass it.

Explore how AMLCC’s features can keep your business completely compliant