This guide is for partners and managing directors at M&A boutiques deciding whether — and how — to introduce AI into their deal workflow. It assumes nothing about which tool you’ll choose, including ours.
The goal is to help you ask sharper questions and avoid the three most common mistakes:
- Treating AI as a single category (it isn’t — there are at least four meaningfully different categories competing for your budget)
- Running pilots on the wrong workflow (most failed AI pilots in advisory firms test the tool on the wrong type of work)
- Evaluating on demo data instead of your own (the gap between vendor demos and real deal documents is enormous)
We’ll cover each in turn.
The four categories of AI for advisory work
AI tools that could plausibly land on your radar fall into four buckets. They’re not all in the same business, even if they sound similar in a demo.
1. Horizontal finance AI
Examples: Hebbia, Rogo, ModelML.
Built for: hedge fund analysts, asset managers, banking research, occasionally corporate development teams at large firms.
Strengths: broad document reading at scale, equity research workflows, cross-document search, large internal knowledge bases.
Where they’re a weak fit for M&A boutiques: these tools are excellent at reading and answering questions about large document sets, but they’re not opinionated about the specific output formats M&A advisory needs. An Information Memorandum is not just “a long document about a company” — it has structural conventions, sector-specific sections, sell-side framing, and a buyer-facing tone that horizontal tools don’t model. You’d end up rebuilding most of that scaffolding yourself, on top of the platform.
2. Horizontal legal AI
Examples: Harvey, Legora.
Built for: law firms, in-house legal teams, large-firm transaction support.
Strengths: contract review, due diligence on legal documents, redlining, legal research.
Where they’re a weak fit for M&A boutiques: M&A boutiques touch legal documents but live in financial documents. The legal-AI platforms are tuned for the legal side of a transaction (NDAs, SPAs, disclosure schedules) — not the sell-side deliverables (teasers, IMs, management presentations, buyer outreach). They’re complementary to advisory work, not a substitute for it.
3. Vertical M&A AI
Examples: NaS_OS. This is the newest and least crowded category, with few direct competitors today.
Built for: sell-side advisory boutiques and corporate finance firms.
Strengths: opinionated about M&A output formats, encodes firm-specific frameworks, designed around the actual workflow from data room to closing.
Where they’re a weak fit: if your work is primarily buy-side, sector research, or non-transaction advisory, the optimization is misaligned. Vertical M&A tools are built around the sell-side mandate as the unit of work.
4. General-purpose AI (ChatGPT, Claude, Gemini)
Built for: everyone.
Strengths: flexible, cheap per seat, no procurement friction.
Where they’re a weak fit for production M&A work: no document persistence between sessions, no source-citation guarantees, no encoded firm style, no security model designed for confidential deal documents. Many firms use these for first-draft scratch work; very few use them in client-facing deliverables.
What to actually evaluate
Three rules — in order of importance.
Rule 1: Test on your own documents, not theirs
The single biggest pattern in failed AI pilots: vendor demos use clean, well-structured documents that don’t resemble what comes out of a real data room. Real data rooms are scanned PDFs, founder-built Excel models with inconsistent naming, board decks with manually formatted tables, and the occasional Word doc someone exported badly.
Insist on running any pilot against three real, anonymized data rooms from your own past deals. Compare:
- How much of the financial data does it extract correctly without prompting?
- How does it handle inconsistencies between sources (e.g., management’s headline EBITDA vs. audited numbers)?
- Does it flag missing data or invent it?
- How does it handle non-English documents, if your deals touch them?
If the vendor pushes back on this, that tells you something.
Rule 2: Evaluate the output, not the chat
Every AI tool today has an impressive Q&A interface. That’s the easy part. What separates production tools from demos is the structured output: the Information Memorandum, the teaser, the management presentation, the working financial table.
Ask:
- Can it produce a complete first-draft IM in your firm’s format, not a generic one?
- Where is every figure on every page sourced from? (If the answer isn’t “this specific document, this specific page,” walk away.)
- How much editorial work remains before this is partner-reviewable?
- Can the output be regenerated when the source data changes, without manual re-work?
The right test isn’t “can it answer a question about this deal?” It’s “would I sign my name to the document it produced, after an afternoon of review?”
Rule 3: Test the failure modes, not the happy path
Demos show the happy path. The happy path is not where you’ll get hurt.
Ask the vendor to demonstrate:
- What the tool does when a critical data point is missing from the data room
- What the tool does when two source documents disagree
- What the output looks like when you ask for analysis the documents don’t support
- What happens when a document is intentionally misleading (this is rare but it happens)
A serious tool either flags the issue clearly or refuses to invent a number. A weak tool fills in plausible-looking content that is technically wrong. The second mode is dangerous and not always obvious in a demo.
The questions that matter on contract terms
Beyond product fit, there are four contract questions worth getting answered in writing:
- Data residency and training. Where do your documents physically live, and is the vendor training models on your client data? For European boutiques, EU residency is increasingly non-negotiable for client mandates. “We don’t train on your data” should be a contractual commitment, not a marketing line.
- Document isolation. Are each engagement’s documents fully isolated from every other client’s, or are they pooled in a shared vector database? Cross-contamination of client data is a credibility-ending event for a boutique.
- Export and lock-in. Can you export every generated document and underlying data table in standard formats? If the vendor disappears tomorrow, what do you walk away with?
- Pricing model alignment. Is pricing per-seat, per-deal, per-document, or per-volume of data? Per-seat pricing optimizes for vendor revenue; per-document pricing tends to align better with how boutiques actually consume the value.
How to run a real pilot
The pilot we recommend for any vendor — including us — is structured like this:
- Three real data rooms. Anonymized if needed, but real. From three different sectors, ideally.
- One existing IM as the gold standard. Pick a deal you closed where the IM was strong. The AI’s first draft should be measured against it.
- One independent analyst as the evaluator. Not the partner who’s championing the tool. Someone with no skin in the decision.
- Two weeks, not three months. Real tools show their value or their limits in two weeks of focused use. Anything longer and you’re letting the vendor’s onboarding team paper over weaknesses.
At the end of the pilot, the evaluator should be able to answer one question: “How many junior-analyst hours did this actually save, on these specific documents, with the output we’d actually use?”
If the answer is “fewer than 20 hours per IM,” the tool isn’t ready for production. If the answer is “more than 40 hours per IM,” it likely is.
What this guide doesn’t cover
We’ve deliberately stayed out of three topics that warrant their own treatment:
- Buy-side workflows. Different category, different evaluation criteria. We may publish a separate guide.
- AI for post-merger integration. Not a typical boutique workflow.
- AI for due diligence on the buyer side. Closer to the horizontal legal/finance tools than to vertical M&A platforms.
If you’d like the guide above adapted for your firm’s specific deal mix, get in touch — we’re happy to walk through it with partners considering AI for the first time.