Import highlights and documents from Readwise into the wiki using the Readwise CLI (not MCP). Searches and browses interactively, then delegates to fetch-readwise-document and fetch-readwise-highlights for streaming large content to disk.
Pull the user's reading history and highlights from Readwise, then compile them into wiki pages.
Use the readwise CLI tool for all Readwise access. Do not use MCP tools, Readwise APIs directly, or any other method — the CLI handles authentication, pagination, and rate limiting. All commands run via Bash.
Use the Readwise CLI freely to search, browse, and explore the user's library. When it's time to actually pull large content into raw/, delegate to:
- `fetch-readwise-document`: streams a full Reader document to disk without loading the body into context.
- `fetch-readwise-highlights`: vector-searches highlights, groups by parent doc, and writes highlight collections to disk.

Run these checks in order. Stop and fix each issue before continuing.
```
which readwise
which node
```

If the CLI is missing:

```
npm install -g @readwise/cli
```

If node itself is missing (macOS arm64 example):

```
curl -fsSL https://nodejs.org/dist/v22.15.0/node-v22.15.0-darwin-arm64.tar.xz | tar -xJ -C /usr/local/lib
ln -sf /usr/local/lib/node-v22.15.0-darwin-arm64/bin/node /usr/local/bin/node
ln -sf /usr/local/lib/node-v22.15.0-darwin-arm64/bin/npm /usr/local/bin/npm
ln -sf /usr/local/lib/node-v22.15.0-darwin-arm64/bin/npx /usr/local/bin/npx
```

Then run `npm install -g @readwise/cli`.

Verify authentication:

```
readwise reader-list-documents --limit 1
```

If that fails, run `readwise login` (opens the user's browser for OAuth; wait for it to complete). Do not proceed until the CLI is installed and authenticated.
Suggest importing by topic first; it's the most useful starting point.
Use the Readwise CLI to explore the user's library interactively:
```
# Search for documents by topic
readwise reader-search-documents --query "<topic>" --limit 20 --json

# List recent documents
readwise reader-list-documents --limit 20 --json

# Search highlights by topic
readwise readwise-search-highlights --vector-search-term "<topic>" --limit 30 --json
```
Show results to the user and let them pick what to import. This is the interactive phase — it's fine to have search results in context here since they're just metadata (titles, authors, snippets).
Note: The CLI --json flag outputs raw JSON arrays, not objects with a results key. Pipe through jq carefully — e.g. jq '.[].title', not jq '.results[].title'.
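As a quick sanity check of that shape, here is a minimal sketch against simulated CLI output (the `id` and `title` field names are assumptions about the output, not confirmed CLI fields):

```shell
# --json emits a raw JSON array, so index into it directly with .[]
echo '[{"id":"doc1","title":"First Doc"},{"id":"doc2","title":"Second Doc"}]' \
  | jq -r '.[].title'
# prints:
# First Doc
# Second Doc
```

The same `.[]` pattern works for pulling IDs to pass to the fetch skills, e.g. `jq -r '.[].id'`.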
CLI flag gotcha: reader-get-document-details uses --document-id (NOT --id). See fetch-readwise-document skill for the full flag reference.
Once you know what sources were found, update wiki/home.md right away — before fetching or ingesting anything. Write a brief overview of what's coming: the topics found, how many sources, what the wiki will cover. This gives the user something to read and shows progress while the import runs.
Import in small batches. Fetch and fully ingest 3-5 sources first so the user can see the wiki taking shape before importing more. A wiki with a few well-connected pages is more useful than a queue of unprocessed raws. After the first batch is ingested and the user can browse it, ask if they want to continue with more.
Once the user has picked what to import, delegate to the appropriate skill:
- `fetch-readwise-document` with the selected doc IDs. It handles metadata, streaming the body to disk, and verification.
- `fetch-readwise-highlights` with the agreed-upon search queries. It handles vector search, deduplication, grouping, and writing highlight files.

Both skills chain into ingest automatically to create wiki pages from the raws.
Important: All files in raw/ must be markdown (.md), never JSON. Temp JSON files from CLI queries go in /tmp/, not raw/. If you need to store structured data from Readwise, convert it to a readable markdown document before saving to raw/.
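A minimal sketch of such a conversion, assuming a temp dump of highlight objects with `title` and `text` fields (the field names and file paths are illustrative assumptions, not confirmed CLI output):

```shell
# Write a simulated CLI dump to /tmp/, never to raw/
echo '[{"title":"On Memory","text":"Spaced repetition works."}]' > /tmp/highlights.json

# Render each highlight as a markdown heading plus blockquote
jq -r '.[] | "## \(.title)\n\n> \(.text)\n"' /tmp/highlights.json > /tmp/highlights-topic.md
```

In practice the rendered markdown would land at a path like `raw/highlights-<topic>.md`, while the JSON stays in `/tmp/`.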
This is the most important performance step. After fetching raw files, do NOT ingest them one at a time. Use the Agent tool to parallelize:
1. Read wiki/index.md and wiki/home.md to understand what exists.
2. Dispatch one subagent per raw file using the Agent tool. Launch them all in a single message so they run concurrently.

Each subagent brief should include:
- a note that the parent agent will update wiki/home.md with the full picture afterward

Example dispatch pattern:
```js
Agent({
  prompt: "Ingest raw/source-a.md into this wiki. Schema is in CLAUDE.md. Current index: [paste index]. Create source-summary at wiki/sources/, propagate claims to concept pages, cross-link from 2-3 existing pages, update index.md and log.md.",
  description: "Ingest source-a"
})
Agent({
  prompt: "Ingest raw/source-b.md into this wiki. Schema is in CLAUDE.md. Current index: [paste index]. Create source-summary at wiki/sources/, propagate claims to concept pages, cross-link from 2-3 existing pages, update index.md and log.md.",
  description: "Ingest source-b"
})
// ... all in the same message for parallel execution
```
Why this matters: Serial ingestion of 5 sources takes 5x as long. Parallel subagents cut wall-clock time dramatically. The dedup pass at the end is cheap.
After all subagents complete:
- Deduplicate wiki/index.md entries (multiple subagents may have added the same pages).
- Update wiki/home.md to reflect everything that was imported.
- Verify wiki/log.md has timestamped entries.
- Report what was imported and what pages were created.
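For the index dedup, one minimal sketch (assuming duplicate entries are byte-identical lines, which may not hold if subagents phrased them differently):

```shell
# Stand-in index file; in the real wiki this is wiki/index.md
printf 'pages/a.md\npages/b.md\npages/a.md\n' > /tmp/index.md

# Keep the first occurrence of each line, preserving order
awk '!seen[$0]++' /tmp/index.md > /tmp/index.dedup && mv /tmp/index.dedup /tmp/index.md
```

Run the same `awk '!seen[$0]++'` line against wiki/index.md once all subagents have finished; near-duplicate entries with different wording still need a manual pass.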