Extract bounded OCR evidence from local image files with macos-ocr-mcp, using macOCR only as a fallback reference when MCP coverage is insufficient. Use when downstream refinement or review needs visible-text evidence, not final semantic decisions.
Use this skill to extract OCR text evidence from local image files in the presentation image pipeline.
The preferred path is macos-ocr-mcp with ocr_image(file_path).
Use it when a caption, object-isolation result, or content classification decision depends on visible text inside an image.
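As a minimal sketch of that preferred path: the `ocr_image` function below is a local stub standing in for the real macos-ocr-mcp tool call (its return shape and the canned text are assumptions, not the tool's documented output), and `extract_text_evidence` shows the intended contract, returning visible-text evidence rather than a semantic decision:

```python
def ocr_image(file_path):
    """Stub standing in for the macos-ocr-mcp ocr_image(file_path) tool.

    The real tool OCRs a local image file; this stub returns a canned
    result so the flow is runnable. The dict shape is an assumption.
    """
    return {"file_path": file_path, "text": "Quarterly Revenue 2024"}

def extract_text_evidence(file_path):
    """Return visible-text evidence for downstream review, not a final decision."""
    result = ocr_image(file_path)
    text = result.get("text", "").strip()
    # Evidence only: report what was seen; leave captioning and
    # classification decisions to the downstream refinement step.
    return {"file_path": file_path, "evidence": text, "has_text": bool(text)}

evidence = extract_text_evidence("slides/slide_03.png")
print(evidence["has_text"], evidence["evidence"])
```

Downstream consumers (caption, object-isolation, or classification steps) would read `evidence` and `has_text`, keeping the OCR layer decision-free.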
For table-like images, this skill may reference a local Apple Vision document-structure helper script as a secondary surface. That helper is not a standalone MCP; it is owned by the table-structure promotion skill, not by this OCR skill.
- Current task: Table -> Row -> Cell normalization (vendored-mcp-onboarding).
- Preferred tool: the macos-ocr-mcp wrapper, called as macos-ocr-mcp.ocr_image(file_path).
- Reference fallback: the macOCR CLI, for interactive region OCR or future --input <file> comparison runs.
- Runtime notes: references/runtime.md.
- Workflow: run macos-ocr-mcp on one image first. Treat macOCR only as a reference or future comparison path unless the local CLI has been explicitly activated in this workspace.
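The backend-preference rule above can be sketched as a small selection function; the function name and flag names are hypothetical illustrations, not part of either tool:

```python
def choose_ocr_backend(mcp_available: bool, macocr_activated: bool) -> str:
    """Pick an OCR backend following the skill's preference order.

    macos-ocr-mcp is always preferred when available; the macOCR CLI
    is a reference/comparison path only, and requires explicit
    activation in the workspace before it may be used.
    """
    if mcp_available:
        return "macos-ocr-mcp"
    if macocr_activated:
        # Reference or comparison runs only, never the primary path.
        return "macOCR"
    raise RuntimeError("no OCR backend: macOCR CLI not activated in this workspace")

print(choose_ocr_backend(mcp_available=True, macocr_activated=False))  # macos-ocr-mcp
```

Raising when neither path is available keeps the failure explicit instead of silently skipping OCR evidence.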