Add speech narration to an existing presentation. Drafts a `speech.json` file from the article text and wires it into the presentation by passing it to `PresentationShell`. Use when the user wants to add voice/narration/speech to a presentation, or when prompted by the article-presentation skill after building a presentation.
Add TTS (text-to-speech) narration to an existing presentation. Narration text lives in a speech.json alongside the slides; the presentation shell handles all of the UI (toggle, orb, first-time dialog, voice selector) automatically.
Uses the browser's Web Speech API (`window.speechSynthesis`) — no API key, no network call, no latency; works offline.
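The call shape is simple. A minimal sketch of the cancel-then-speak logic (the `SynthLike` and `UtteranceLike` interfaces below are local stand-ins so the logic can be shown on its own; in the browser this is `window.speechSynthesis` and `new SpeechSynthesisUtterance(text)`):

```typescript
// Stand-in for the parts of SpeechSynthesisUtterance we care about here.
interface UtteranceLike {
  text: string;
  lang: string; // BCP 47 tag, e.g. "it-IT"
}

// Stand-in for the parts of SpeechSynthesis we care about here.
interface SynthLike {
  cancel(): void;
  speak(utterance: UtteranceLike): void;
}

// Cancel anything still speaking, then queue the new text.
// An empty string means the slide is silent, so nothing is queued.
function speakSlide(
  synth: SynthLike,
  text: string,
  lang = "it-IT",
): UtteranceLike | null {
  synth.cancel();
  if (text === "") return null;
  const utterance: UtteranceLike = { text, lang };
  synth.speak(utterance);
  return utterance;
}
```

This is a sketch of the idea, not the shell's real implementation.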
A presentation must already exist at `src/app/articles/[slug]/presentazione/` with a working `slides.tsx` that renders `<PresentationShell ...>`.
No packages to install. No environment variables needed.
## Generate speech.json

Run the generator to get a first pass:
```bash
npx tsx scripts/generate-speech-json.ts <slug>
```
This reads the article MDX, strips markdown, splits it by sections, and distributes the text across slide slots. Output goes to `src/app/articles/<slug>/presentazione/speech.json`.
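The distribution step can be sketched as a pure function: split the cleaned article text into sentences, then deal them out evenly across a fixed number of slide slots. The real `scripts/generate-speech-json.ts` may differ; the `distribute` name and the sentence-splitting regex here are illustrative only.

```typescript
// Hypothetical sketch of distributing article text across slide slots.
function distribute(text: string, slots: number): string[] {
  // Naive sentence split on terminal ., !, ? punctuation.
  const sentences = text.match(/[^.!?]+[.!?]+/g) ?? [text];
  const out: string[] = Array.from({ length: slots }, () => "");
  const perSlot = Math.ceil(sentences.length / slots);
  sentences.forEach((sentence, i) => {
    // Cap at the last slot so leftovers never fall off the end.
    const slot = Math.min(Math.floor(i / perSlot), slots - 1);
    out[slot] += sentence.trim() + " ";
  });
  return out.map((s) => s.trim());
}
```

With four sentences and two slots, each slot ends up with two sentences; this evenness is exactly why the output is only a rough draft and needs hand-editing.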
Important: the auto-generated text is a rough draft, not shippable. Refine it by hand:
"") for slides that should stay silent (title, visual-only beats, closing pause)speech.json format{
"voice": "it-IT",
"slides": [
{ "text": "Testo di narrazione per la slide 1..." },
{ "text": "Testo di narrazione per la slide 2..." },
{ "text": "" }
]
}
slides[currentIndex] directly.voice: BCP 47 language tag used as SpeechSynthesisUtterance.lang. Use "it-IT" for Italian. The browser auto-picks the best available system voice for that language.text: "" means no narration for that slide.slides.tsxThe integration is tiny — the PresentationShell does all the work when you give it speechData. In most cases you only need to add two lines: an import and a prop.
```tsx
// slides.tsx
"use client";

import { PresentationShell } from "@/components/presentation/presentation-shell";
// ...slide imports...
import speechData from "./speech.json"; // ← add

export function PresentationSlides({ slug }: { slug: string }) {
  const slides = [
    { key: "title", component: <Slide01Title key="title" /> },
    // ...
  ];

  return (
    <PresentationShell
      slug={slug}
      speechData={speechData} // ← add (pass `null` to disable narration)
      slides={slides}
    />
  );
}
```
That's it. Do not import `NarrationProvider`, `NarrationToggle`, `AudioOrb`, or `NarrationDialog` into `slides.tsx` — the shell mounts all of them internally when `speechData` is non-null. Wrapping the JSX yourself will result in double providers and duplicated UI.
If narration should be optional (e.g. you want to ship the presentation first and add narration later), pass `speechData={null}` until the `speech.json` is ready.
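Hand-editing makes it easy for `speech.json` to drift out of sync with the slide count. A small sanity check, assuming the shape shown earlier (the `SpeechData` type and `validateSpeechData` helper are illustrative, not part of the codebase):

```typescript
// Assumed shape of speech.json, matching the documented format.
interface SpeechData {
  voice: string; // BCP 47 tag, e.g. "it-IT"
  slides: { text: string }[]; // "" = silent slide
}

// Returns a list of problems; empty means the file lines up with the deck.
function validateSpeechData(data: SpeechData, slideCount: number): string[] {
  const problems: string[] = [];
  if (data.slides.length !== slideCount) {
    problems.push(
      `speech.json has ${data.slides.length} entries but the deck has ${slideCount} slides`,
    );
  }
  return problems;
}
```

Running a check like this (for example in a test) catches the mismatch before `pnpm build` ships a deck whose narration is off by one slide.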
All narration pieces live in `src/components/presentation/`:
- `presentation-shell.tsx` — the shell that conditionally wires up narration when `speechData` is provided
- `narration-provider.tsx` — React context powered by the `useNarration` hook
- `use-narration.ts` — hook managing `speechSynthesis`, voice selection, and word-boundary pulse events
- `narration-toggle.tsx` — mute/unmute button (`Volume2` / `VolumeOff` from lucide-react)
- `audio-orb.tsx` — pulsing orb driven by word-boundary events plus a smooth sine oscillation
- `narration-dialog.tsx` — first-visit shadcn/ui Dialog that asks whether to enable narration
- `voice-selector.tsx` — dropdown of available Italian voices, shown when narration is on

Behavior:

- `NarrationDialog` asks "Sì, attiva" vs "No, grazie"; the choice is persisted in localStorage.
- `NarrationToggle` appears in the header (next to the slide counter).
- On slide change, `speechSynthesis.cancel()` is called and then the new slide's text is spoken with a ~1s delay so the slide animation can complete first.
- `onboundary` (word-boundary) events drive the `AudioOrb` pulse; between events the orb eases down, so it breathes with the speech.
- Where `onboundary` does not fire reliably, the orb falls back to its smooth oscillation and audio still works.

Final checks:

- `pnpm build` passes.
- `speech.json`'s `slides` array length equals the number of slides in `slides.tsx`.
- The first-visit dialog choice is persisted in localStorage; clear it to see the dialog again.
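The orb motion described above can be sketched as a pure function of time: each word-boundary event resets the time-since-boundary to zero, kicking the pulse up, and between events the pulse decays toward a smooth sine baseline. The constants and the `orbScale` name are assumptions for illustration, not values from `audio-orb.tsx`.

```typescript
// Illustrative sketch of the orb scale: baseline "breathing" plus a
// word-boundary pulse that eases down between events.
function orbScale(msSinceBoundary: number, timeMs: number): number {
  const base = 1 + 0.05 * Math.sin(timeMs / 300); // always-on smooth oscillation
  const pulse = 0.3 * Math.exp(-msSinceBoundary / 200); // decays after each word
  return base + pulse;
}
```

When `onboundary` never fires, `msSinceBoundary` grows without bound, the pulse term vanishes, and only the baseline oscillation remains — which is exactly the fallback behavior noted above.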