The FinCEN Files and why RegTech still cannot catch them
$2 trillion in suspicious transactions flagged by the largest banks — and processed anyway. We break down why existing RegTech architectures structurally cannot solve this, and what a platform built for the job would actually look like.
What the leak actually showed
In 2020, the leaked FinCEN Files documented more than $2 trillion in suspicious transactions that the world's largest banks had filed suspicious activity reports (SARs) on, and then processed anyway. It wasn't a failure of detection. It was a failure of resolution: the bank noticed, reported, and still moved the money.
Why "noticed and moved" is the default
Walk into a large compliance team and you find the same pattern everywhere. Screening tools return hundreds of alerts per analyst per day. Most are false positives. A handful are real but under-evidenced. A handful are real and well-evidenced — but by the time the analyst finds them, the transaction has cleared.
The root cause is structural: data is scattered across a dozen vendors, each returning a different shape, with no shared entity model, no shared audit trail, no shared context. The analyst is doing the integration work a platform should be doing.
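To make the integration burden concrete, here is a minimal sketch of the adapter work a shared entity model removes. Everything in it is hypothetical: the vendor payload shapes, the field names, and the `Alert` record are illustrative, not any real vendor's API.

```python
from dataclasses import dataclass

# Hypothetical shared alert model -- the shape most stacks lack.
@dataclass
class Alert:
    entity_id: str      # one ID across every data source
    source: str         # which vendor produced the alert
    risk_score: float   # normalized to 0..1
    evidence: dict      # raw payload, kept for the audit trail

# Two invented vendor payloads, each with a different shape.
def from_vendor_a(payload: dict) -> Alert:
    return Alert(
        entity_id=payload["customerRef"],
        source="vendor_a",
        risk_score=payload["score"] / 100,  # vendor A scores 0..100
        evidence=payload,
    )

def from_vendor_b(payload: dict) -> Alert:
    return Alert(
        entity_id=payload["subject"]["id"],
        source="vendor_b",
        risk_score={"low": 0.2, "medium": 0.5, "high": 0.9}[payload["band"]],
        evidence=payload,
    )

# Without adapters like these, the analyst does this mapping by hand,
# alert by alert, with no guarantee the two IDs ever get linked.
alerts = [
    from_vendor_a({"customerRef": "C-1042", "score": 87}),
    from_vendor_b({"subject": {"id": "C-1042"}, "band": "high"}),
]
same_entity = {a.entity_id for a in alerts}  # both resolve to one entity
```

The point is not the adapters themselves but where they live: in a platform, this mapping runs once per source; without one, it runs in an analyst's head on every alert.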
What would actually work
You cannot catch sophisticated financial crime with a pile of screening APIs and a spreadsheet. You need one continuous context across the entire relationship: the documents that onboarded the client, the historical transactions, the corporate structure, the media footprint, the regulatory filings. An LLM orchestrator can reason over that context — as long as the context actually exists in one place.
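As a sketch of what "one continuous context" means in practice, the record below gathers the relationship data named above into a single structure an LLM orchestrator could consume. The `CaseContext` class, its fields, and `to_prompt` are assumptions for illustration, not a description of any shipped schema.

```python
from dataclasses import dataclass, field

# Hypothetical single-context record: everything known about one
# relationship in one place, instead of scattered across vendors.
@dataclass
class CaseContext:
    client_id: str
    onboarding_docs: list[str] = field(default_factory=list)
    transactions: list[dict] = field(default_factory=list)
    corporate_structure: list[str] = field(default_factory=list)  # ownership chain
    media_hits: list[str] = field(default_factory=list)
    regulatory_filings: list[str] = field(default_factory=list)

    def to_prompt(self) -> str:
        """Serialize the full picture for an LLM to reason over."""
        return "\n".join([
            f"Client: {self.client_id}",
            f"Onboarding documents: {self.onboarding_docs}",
            f"Recent transactions: {self.transactions}",
            f"Ownership chain: {self.corporate_structure}",
            f"Adverse media: {self.media_hits}",
            f"Regulatory filings: {self.regulatory_filings}",
        ])

ctx = CaseContext(
    client_id="C-1042",
    transactions=[{"amount": 9900, "currency": "USD"}],
    media_hits=["2020 enforcement action (invented example)"],
)
prompt = ctx.to_prompt()
```

The design choice that matters is the single `client_id` key: every source hangs off the same entity, so the model sees the transaction next to the media hit rather than in two unlinked tools.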
That is exactly what we built Mercurium to do. Not a screening tool bolted onto another tool. One integrated surface that an AI can actually reason over.
The bet
We think the next generation of compliance platforms won't win by having more data sources. They'll win by having the same data, continuously connected, with AI that can act on the full picture. Everything we ship is a step toward that.