Information Request List (IRL) Tracker skill for Datasite deal rooms. Use this skill whenever a deal team wants to compare VDR content against a buyer's information request list, track document delivery status, or build a due diligence tracker dashboard. Triggers include: "map the IRL", "track what's been provided", "check the information request list", "information gathering list", "IGL", "what have we delivered", "DD tracker", "due diligence tracker", "compare VDR against the request list", "what's still outstanding", "build a diligence dashboard", or any request to track document delivery against buyer requests. Use proactively whenever a buyer has submitted a request list and the deal team needs to manage and track responses.
You are helping a deal team map their Datasite data room content against an Information Request List (IRL), assess how well each request is addressed, and produce a live tracking dashboard. The output is a single-file HTML dashboard with no backend required.
Use these terms precisely when communicating with the user:
When in doubt: if it is not the single top-level container for the whole project, it is a folder.
| Capability | Free | Requires Blueflame |
|---|---|---|
| Build document inventory from VDR | ✅ | — |
| Match IRL items to documents semantically | — | ✅ |
| Assess whether document content actually addresses each request | — | ✅ |
| Build HTML tracking dashboard | ✅ | — |
Without Blueflame: The skill can build a document inventory and populate the dashboard structure, but cannot match IRL items to documents or assess content relevance. All items will show as Open. The core value of this skill requires Blueflame.
With Blueflame: searchDocuments semantically matches each IRL requirement to relevant passages in the data room, assigning Available / Partially Complete / Open status with source citations.
⚠️ Blueflame content guard — two-tier behaviour
`searchDocuments` is the only permitted source of document content.
Do not use Claude's training knowledge, general M&A knowledge, or inference from file names for any findings.
Step 3a (filename and folder matching) is always free. Complete it across all IRL items first.
Steps 3b/3c (keyword and semantic content search) require `searchDocuments`. Before starting Step 3b, attempt one call. If it returns an activation link instead of results, do not discard Step 3a results. Present them first, then say:

"I've completed filename matching across your [N] IRL items — results above show what I could match by document name and location. To verify that those documents actually address each request (not just exist nearby), Blueflame AI search needs to be activated:

🔗 Activate Blueflame: [activation link]

With Blueflame: I'll read inside each document to confirm it covers the right year, entity, or clause — so 'Available' means genuinely addressed, not just 'a file with a matching name exists'. Some items shown as filename-matched may be downgraded or upgraded once content is verified. Would you like to activate now to complete the content verification?"
All content findings must be sourced exclusively from tool results.
`listFolderContents` — efficient traversal

- `depth: 1` (default) — immediate children only. Use for targeted lookups.
- `depth: 5, foldersOnly: true` (default when depth > 1) — full folder tree in one call, no documents. Use for structural checks.
- `depth: 5, foldersOnly: false` — full folder tree including all document metadata in one call. Use when building a document inventory.
- When `depth > 1`, the response is a flat list with `depth` and `path` columns — not a nested tree.
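If a nested view is ever needed client-side, the flat response can be folded back into a tree. A minimal sketch, assuming each row exposes a slash-delimited `path` column — the real response schema may name its columns differently:

```javascript
// Rebuild a nested folder tree from a flat listFolderContents-style response.
// Assumes each row has a slash-delimited `path`; adjust to the real schema.
function buildTree(rows) {
  const root = { name: "(root)", children: {} };
  for (const row of rows) {
    let node = root;
    for (const part of row.path.split("/")) {
      node.children[part] = node.children[part] || { name: part, children: {} };
      node = node.children[part];
    }
    node.meta = row; // attach the original row (type, pages, id, ...) to its node
  }
  return root;
}

const flat = [
  { path: "3. Financials/3.1 Audited Financial Statements", type: "folder" },
  { path: "3. Financials/3.1 Audited Financial Statements/FY2024 Accounts.pdf", type: "document" },
];
const tree = buildTree(flat);
```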
The user provides a spreadsheet or document containing the IRL. Read it and extract for each item:
If category/section is not in the IRL, infer it from the requirement text using these groupings: Financial Performance, Tax, Legal & Regulatory, Commercial & Customers, HR & Employment, Intellectual Property & Technology, Operations, ESG, Corporate & Governance, Other.
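The category-inference fallback can be sketched as a simple keyword lookup. The keyword lists below are illustrative assumptions, not part of the skill spec:

```javascript
// Keyword-based fallback for inferring an IRL category from requirement text.
// Keyword lists are illustrative; extend per deal. Unmatched items fall to "Other".
const CATEGORY_KEYWORDS = {
  "Financial Performance": ["financial statement", "revenue", "ebitda", "management accounts", "audit"],
  "Tax": ["tax", "vat", "transfer pricing"],
  "HR & Employment": ["employee", "employment", "pension", "headcount"],
  "Intellectual Property & Technology": ["patent", "trademark", "software licence"],
};

function inferCategory(requirement) {
  const text = requirement.toLowerCase();
  for (const [category, keywords] of Object.entries(CATEGORY_KEYWORDS)) {
    if (keywords.some((k) => text.includes(k))) return category;
  }
  return "Other";
}
```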
Call getProjectOverview for deal context. Then call listFolderContents with depth: 5, foldersOnly: false to build the complete document inventory in a single call. The flat response gives you every document's name, metadata ID, path, file type, status, and page count:
This is your document inventory. You'll reference it throughout the matching process.
For each IRL requirement, find the best matching document(s) in the VDR. Use a layered approach — don't rely on filename alone:
Before any content search, check whether the document inventory already contains a file whose name or folder path clearly matches the requirement. This step costs zero Blueflame credits.
Record filename-only matches as low confidence, pending content confirmation. If Blueflame is not available, stop here. Complete the filename-matching pass, then proceed to Step 5 to offer the dashboard. If the user confirms, produce it with filename-only matches — clearly label all statuses as "(filename only — unverified)" and note that content confirmation requires Blueflame.
Run searchDocuments for specific terms in the requirement (dates, entity names, contract parties, regulation names). Keyword search is more targeted than semantic search and should run first to catch exact matches cheaply before triggering a full semantic pass.
Run searchDocuments with the requirement text (or a distilled version of it) as the query, with decompose: true for complex multi-part requests. This returns text passages with document names and page numbers. The passage content confirms whether the document actually addresses the request — not just whether it exists nearby. Only run this step if 3a and 3b did not return a high-confidence match.
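The 3a → 3b → 3c fall-through can be sketched as follows. The three search functions are hypothetical stubs standing in for the inventory scan and the two `searchDocuments` modes; only the ordering and early-exit logic is the point:

```javascript
// Layered matching sketch: filename scan first (free), keyword search second,
// semantic search only if neither produced a high-confidence hit.
// The three functions are injected stubs, not real tool calls.
function matchItem(item, { filenameMatch, keywordSearch, semanticSearch }) {
  const byName = filenameMatch(item); // 3a: zero Blueflame credits
  if (byName && byName.confidence === "high") return byName;

  const byKeyword = keywordSearch(item); // 3b: exact terms, cheap
  if (byKeyword && byKeyword.confidence === "high") return byKeyword;

  const bySemantic = semanticSearch(item); // 3c: full semantic pass
  return bySemantic || byKeyword || byName || null;
}
```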
Some requests cannot be satisfied by a single document — the information is distributed. Examples:
When this applies, note it explicitly in the source reference: "Information available through analysis of [Doc A] + [Doc B] — not available as a single file."
For each IRL item, assign one of three statuses based on how well the VDR content addresses the request:
| Status | Meaning | Criteria |
|---|---|---|
| Available | Document fully addresses the request | You found a clear, directly responsive document and can cite the relevant passage/page |
| Partially Complete | Some but not all of the request is covered | e.g. one tax return found but request covers 3 years; one customer contract found but request asks for the top 10 |
| Open | No responsive document found after searching | Neither semantic nor keyword search returned relevant content |
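For requests that ask for a known number of items (e.g. 3 years of accounts), the mapping from match counts to AI status can be sketched as below; the count-based thresholds are an assumption:

```javascript
// Three-way AI status from how many of the requested items were found.
// foundCount/requestedCount model partial coverage, e.g. 2 of 3 tax years.
function assignStatus(foundCount, requestedCount = 1) {
  if (foundCount === 0) return "Open";
  if (foundCount < requestedCount) return "Partially Complete";
  return "Available";
}
```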
Initial status in the dashboard is set by AI matching: items the AI assesses as Available or Partially Complete start as Provided (AI); unmatched items start as Open.
The deal team can then transition items by hand: ✓ Confirm moves Provided (AI) to Complete, and any item can be marked N/A if the request does not apply.
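One way to encode the human review moves is as a transition table. The allowed moves here are inferred from the four dashboard statuses and the ✓ Confirm action, so treat the rule set as an assumption:

```javascript
// Hedged model of human review transitions between dashboard statuses.
const HUMAN_TRANSITIONS = {
  "Provided (AI)": ["Complete", "Open", "N/A"], // confirm, reject, or mark not applicable
  "Open": ["Complete", "N/A"],                  // team locates/uploads a document manually
  "Complete": ["Open"],                         // reopen if coverage is disputed
  "N/A": ["Open"],
};

function canTransition(from, to) {
  return (HUMAN_TRANSITIONS[from] || []).includes(to);
}
```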
For each matched document record:

- the filename and page number
- the VDR path (e.g. `3.1 Audited Financial Statements`)
- a confidence rating: `high` (clear direct match), `med` (probable match), `low` (partial or inferred)
- the source: `ai_match`

One IRL item can map to multiple documents. One document can satisfy multiple IRL items.
Produce a structured dataset with one row per IRL item:
```js
{
  id: "1.1",
  requirement: "Audited financial statements for the last 3 years",
  section: "Financial Performance",
  stage: 1,
  status: "Provided (AI)", // Open / Provided (AI) / Complete / N/A
  ai_status: "Partially Complete", // Available / Partially Complete / Open
  confidence: "med",
  documents: [
    {
      filename: "Apex Ltd - Audited Accounts - FY2024.pdf",
      vdr_path: "3.1 Audited Financial Statements",
      page: 1,
      source: "ai_match"
    }
  ],
  gap_note: "FY2023 and FY2022 not found in data room",
  category: "Financial Performance",
  date_matched: "2026-04-07"
}
```
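The wrap-up numbers used in Step 6 can be derived from this dataset in one pass. Per the skill, only human-confirmed Complete items count toward confirmed progress:

```javascript
// Summarise per-item rows into the Step 6 wrap-up figures.
// AI coverage is tallied from ai_status; confirmed progress from status === "Complete".
function summarise(rows) {
  const byAiStatus = { Available: 0, "Partially Complete": 0, Open: 0 };
  let complete = 0;
  for (const row of rows) {
    if (row.ai_status in byAiStatus) byAiStatus[row.ai_status] += 1;
    if (row.status === "Complete") complete += 1;
  }
  return {
    total: rows.length,
    ...byAiStatus,
    confirmedProgressPct: rows.length ? Math.round((100 * complete) / rows.length) : 0,
  };
}
```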
Before generating the dashboard, ask:
"I've completed the IRL mapping. Would you like me to generate the full interactive HTML tracking dashboard now? It includes status views, a gap report, and CSV/PDF export — but rendering it will use additional credits. Alternatively I can give you a plain text summary now."
Only build the dashboard if the user confirms. If they decline, go to Step 6 and deliver a plain text summary.
Generate a single-file, self-contained HTML artifact. No backend, no frameworks. All state via vanilla JS. Use a clean, professional style — white cards, dark navy headings, green/amber/red status colours, subtle borders and shadows.
Status badge colours follow the green/amber/red scheme: green for Complete and Available, amber for Partially Complete and Provided (AI), red for Open.
ID | Requirement | Category | Stage | Status (badge) | Documents Provided (filenames with confidence dots) | Date Matched

Full list of all VDR documents scanned, with:
Sorted by VDR index. Filterable by section and status.
Button in header → downloads DD_Tracker_[ProjectName]_[YYYY-MM-DD].csv with columns:
Item ID, Requirement, Category, Stage, Status, Files Uploaded, Confidence, Date Matched
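A minimal in-browser CSV builder for that export might look like this; the quoting logic and field mapping are a sketch against the Step 4 dataset shape:

```javascript
// Build the CSV export with the columns from the skill spec.
// Fields containing commas, quotes, or newlines are quoted per RFC 4180.
const HEADERS = ["Item ID", "Requirement", "Category", "Stage", "Status",
                 "Files Uploaded", "Confidence", "Date Matched"];

function toCsv(rows) {
  const esc = (v) => {
    const s = String(v ?? "");
    return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
  };
  const lines = [HEADERS.join(",")];
  for (const r of rows) {
    lines.push([
      r.id, r.requirement, r.category, r.stage, r.status,
      r.documents.map((d) => d.filename).join("; "),
      r.confidence, r.date_matched,
    ].map(esc).join(","));
  }
  return lines.join("\n");
}
```

In the dashboard, the returned string would typically be wrapped in a `Blob` and downloaded via a temporary object URL.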
Button in header → opens new window with print-ready HTML, triggers window.print():
`@page { size: A4; margin: 20mm }`, hide sidebar/header, show only report content.

After rendering the dashboard, summarise:
"I've mapped [N] IRL items against the data room. [X] are Available (matched with high/med confidence), [Y] are Partially Complete, and [Z] are Open with no document found. Overall AI-assisted coverage: [%].
Use ✓ Confirm to validate AI matches and move items to Complete. The dashboard tracks completion in real time — only human-confirmed items count toward overall progress."
Content beats filename. A document called Q4_Report.pdf in the Finance folder might satisfy an IRL request for management accounts — or it might not. Always use searchDocuments to read the content before marking as Available.
One document, many requests. The same audited accounts file might satisfy the request for "annual financials", "revenue figures", "EBITDA history", and "depreciation policy" simultaneously. Map it to all relevant items.
Partial is honest. If a request asks for 3 years of tax returns and you found 2, mark it Partially Complete and note the gap. Don't mark it Available — the buyer will notice.
AI matches are provisional. Every item starts as "Provided (AI)" at best. The deal team's confirmation step is what makes it Complete. This distinction is important — it protects the deal team from inadvertently representing incomplete coverage as confirmed.
Store metadata, not binaries. The dashboard stores filenames, paths, confidence scores, and timestamps — not the actual file content. This keeps the HTML lightweight and shareable.