This is the third article in a series on how artificial intelligence (AI) is reshaping due diligence best practices.
Compliance teams are under pressure. Private equity firms, corporates, and banks all face the same squeeze: do more diligence, faster, with fewer resources. It’s no surprise that AI-driven tools have flooded the market, promising to automate away the bottlenecks. Speed, scale, low cost – the pitch is compelling.
Deloitte’s 2025 M&A Generative AI Study found that, “overall, 86% of organizations have incorporated GenAI into aspects of their M&A workflows or daily activities,” and that 35% are actively using GenAI in the target screening and initial due diligence phase of the M&A lifecycle. McKinsey & Company, meanwhile, suggests that agentic AI could be the next significant innovation in KYC/AML.
But here’s the question no one wants to ask out loud: are we trading speed for accuracy?
Many of us have spent years watching technology transform risk assessment, and some of us are increasingly concerned that we’re not having honest conversations about AI’s limitations in this space. The industry needs to mature beyond the hype cycle and grapple with what these tools do well, where they fail, and how we build guardrails around their use.
What AI Actually Does (and Doesn’t Do)
Most due diligence AI runs on machine learning models trained to identify patterns across massive text datasets. Feed it enough data, and it gets remarkably good at surfacing potential red flags from public sources faster than any human team could.
But – and this matters – AI doesn’t understand what it’s reading. It recognizes patterns, not meaning. It can’t verify facts independently. It has no concept of source reliability or editorial judgment. Every output is only as good as the training data and the assumptions baked into the model.
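To make that limitation concrete, here is a deliberately naive sketch in Python. No real screening tool is this crude – the terms and snippets are invented for illustration – but the core failure mode is the one just described: the code matches the pattern “fraud” whether the text alleges it or refutes it.

```python
# Deliberately naive adverse-media flagger, for illustration only.
# It matches surface patterns; it has no idea what the text means.

ADVERSE_TERMS = {"fraud", "bribery", "money laundering", "sanctions"}

def flag_adverse(snippet: str) -> bool:
    """Flag a snippet if it contains any adverse term."""
    text = snippet.lower()
    return any(term in text for term in ADVERSE_TERMS)

# Both snippets raise the same flag, though only one is actually adverse:
print(flag_adverse("Executive convicted of fraud in 2019"))        # True
print(flag_adverse("Executive cleared of all fraud allegations"))  # True
```

Production models are far more sophisticated than a keyword list, but the gap between matching a pattern and understanding a claim is a difference of degree, not kind.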
If you don’t understand these constraints, you’re flying blind.
Where AI Actually Helps in Due Diligence
When deployed thoughtfully, AI genuinely improves certain parts of the process. Early-stage scoping is the obvious win. You can quickly map jurisdictions, identify related parties, and flag obvious adverse media for high-profile subjects who generate thousands of search results. That’s useful. It lets analysts spend their time on analysis instead of data collection.
AI also handles volume well. Large datasets, multiple entities, cross-border research – these are areas where automation can meaningfully accelerate workflows. And at the back end, it can serve as a quality check, helping ensure nothing material slipped through before you finalize a report.
But these are supporting roles, not primary analysis.
The Accuracy Problem Nobody Wants to Discuss
Here’s what clients should be concerned about:
- AI aggregates information from online sources that are often outdated, incomplete, or flat-out wrong. Without human judgment to assess credibility, you end up with outputs that sound authoritative but may be fundamentally flawed.
- Context is another blind spot. AI spots correlations, not causation. It can tell you a pattern exists but not whether it matters or what you should do about it. That gap between detection and interpretation is where real risk lives.
- And then there’s the false positive problem. Anyone working with these tools knows this pain intimately. Common names get conflated. Multiple individuals blur together. You end up with a data dump that requires hours of human analysis to untangle – which defeats the entire efficiency promise. The sketch after this list shows how easily that happens.
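To see why name conflation is so stubborn, consider a simplified sketch using only Python’s standard library. The subject, candidate names, and threshold are invented for illustration; commercial screening tools use far richer entity resolution, but they face the same trade-off: a threshold loose enough to catch genuine aliases also pulls in lookalikes, and even a perfect string match says nothing about whether two records describe the same person.

```python
# Illustrative only: crude name matching with Python's standard library.
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Crude similarity score between two names, from 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

subject = "John A. Smith"  # hypothetical research subject
candidates = ["John Smith", "Jon Smith", "John A. Smythe", "Joan Smith"]

THRESHOLD = 0.8  # loose enough for aliases, loose enough for lookalikes
for name in candidates:
    score = name_similarity(subject, name)
    status = "FLAG" if score >= THRESHOLD else "skip"
    print(f"{status}: {name} (score {score:.2f})")

# FLAG: John Smith (0.87), Jon Smith (0.82), John A. Smythe (0.89);
# skip: Joan Smith (0.78). Every flagged hit still needs a human to
# confirm it is the same individual, not merely a similar name.
```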
Structural Constraints That Won’t Go Away
Most AI tools are limited to Google-indexed content. They can’t access court records, regulatory filings, corporate registries, or specialized databases directly. The truly critical information – litigation records, regulatory enforcement actions, beneficial ownership structures – still requires manual, jurisdiction-specific research.
There’s also a recency bias problem. Performance degrades sharply for anything older than five to ten years. For cross-border work, language and cultural nuances introduce another layer of error. And AI struggles to distinguish primary sources from secondary reporting, which means you still need humans to trace information back to authoritative materials.
These aren’t temporary limitations. They’re structural constraints that won’t be solved by the next model update.
Building Responsible Frameworks
So, what does responsible use of AI look like?
First – every AI-generated finding needs human verification against primary sources. No exceptions. Automation can surface leads, but humans must validate them before they go into a report that carries legal and reputational weight.
Second – strict data boundaries. AI should never touch proprietary information, personal identifiers from confidential sources, or anything that could create privacy or security risks. The convenience of feeding everything into a model isn’t worth the exposure.
Third – formal governance. At our firm, we established an internal AI committee that vets every tool before deployment – assessing not just efficiency but quality, ethics, and regulatory compliance. We test against known cases, monitor performance over time, and maintain clear policies about where AI can and cannot be used.
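To make the first principle tangible, here is a minimal sketch of what a “no exceptions” verification gate might look like in code. The Finding structure, field names, and workflow below are hypothetical – an illustration of the idea, not a description of our internal systems or any particular vendor’s product.

```python
# Hypothetical human-verification gate, for illustration only.
from dataclasses import dataclass, field

@dataclass
class Finding:
    summary: str
    ai_generated: bool
    primary_sources: list[str] = field(default_factory=list)
    verified_by: str | None = None  # analyst who checked the primary sources

def include_in_report(finding: Finding) -> bool:
    """AI-surfaced leads only reach the report after human verification."""
    if finding.ai_generated:
        return bool(finding.verified_by and finding.primary_sources)
    return True

lead = Finding("Possible 2021 regulatory action", ai_generated=True)
assert not include_in_report(lead)  # blocked: unverified AI-generated lead

lead.primary_sources.append("Regulator enforcement register (primary source)")
lead.verified_by = "analyst_jdoe"
assert include_in_report(lead)      # now eligible for inclusion
```

The point of the gate is structural: an AI-surfaced lead cannot reach the report until a named analyst has tied it back to a primary source.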
This isn’t about being anti-technology. It’s about being serious about risk management in a field where errors have consequences.
Augmented, Not Automated
AI will keep evolving. The tools will get better. Efficiency gains are real and valuable. But the notion that we can or should fully automate due diligence is misguided.
The future isn’t automated research. It’s augmented research – experienced analysts supported by technology, governed by clear policies, anchored in professional judgment.
In a field where findings influence investment decisions, regulatory outcomes, and reputational standing, there’s no room for cutting corners. The clients who come to us shouldn’t be looking for the fastest answer. They should be looking for the right answer. That still requires human expertise, ethical oversight, and the willingness to say when automation isn’t enough.
Technology should enhance what we do, not replace what we do. Getting that balance right isn’t optional anymore – it’s the baseline for credible work in this space.
Ready to Transform Your Due Diligence Process?
Discover how IntegrityRisk’s InitialLook™ Tech-enabled | Human-verified due diligence product can enhance your due diligence research.
Interested in our prior two articles? You can find them here:
AI in the Financial Sector: Challenges Remain, but the Future Looks Bright