Find translation issues in ULT/Hebrew/Greek texts. Covers 94 issue types across 7 categories. Use when asked to identify issues, find what needs notes, or analyze a passage for translation concerns.
Prefer workspace MCP tools in restricted runs:
- mcp__workspace-tools__fetch_door43
- mcp__workspace-tools__compare_ult_ust
- mcp__workspace-tools__detect_abstract_nouns
- mcp__workspace-tools__check_tw_headwords
- mcp__workspace-tools__build_tn_index

Identify translation issues in biblical text that require translation notes. This skill focuses on recognition and classification - note writing is handled separately.
When invoked with arguments like 2sam 1 or psa 58 local:
The mode argument is local or fetch (default: fetch).

Source modes:
- fetch (default): Grab editor-approved ULT/UST from unfoldingWord master
- local: Look for local files in data/published_ult/ and data/published_ust/

Examples:
/issue-identification psa 58 # Fetch from master (default)
/issue-identification psa 58 fetch # Same as above, explicit
/issue-identification psa 58 local # Use local files
Book abbreviations follow standard 3-letter codes or common variants (e.g., 2sa or 2sam).
There are two ways to use this skill:
Fetch mode (default) - Use mcp__workspace-tools__fetch_door43 for ULT and UST.
Local mode - Use local files:
# Copy local files (NN is book number, e.g., 19 for PSA)
cp data/published_ult/<NN>-<BOOK>.usfm /tmp/book_ult.usfm
cp data/published_ust/<NN>-<BOOK>.usfm /tmp/book_ust.usfm 2>/dev/null || true
If UST is missing, continue without it (first pass, UST not generated yet). If ULT is missing, error (need at least ULT to identify issues).
Extract alignment data and plain text using usfm-js:
# Parse ULT - get alignments and plain text
node .claude/skills/utilities/scripts/usfm/parse_usfm.js /tmp/book_ult.usfm \
--chapter <N> \
--output-json /tmp/alignments.json \
--output-plain /tmp/ult_plain.usfm
# Parse UST - get plain text only (if UST exists)
node .claude/skills/utilities/scripts/usfm/parse_usfm.js /tmp/book_ust.usfm \
--plain-only > /tmp/ust_plain.usfm 2>/dev/null || true
EDITOR_NOTES="data/editor-notes/<BOOK>.md"
if [ -f "$EDITOR_NOTES" ]; then
cat "$EDITOR_NOTES"
fi
If editor notes exist for this book, read them carefully. These are observations from human editors who have already been working through the text. They may flag patterns or passages that deserve extra attention.
Incorporate these observations into your analysis — they should heighten your attention to the flagged patterns, not replace your systematic review.
Where UST diverges from ULT (beyond synonym/clarity changes), there may be a translation issue:
Use mcp__workspace-tools__compare_ult_ust with ultFile, ustFile, and chapter.
Output shows verses where UST made significant changes, with suggested issue types:
| Pattern | Suggested Issue |
|---|---|
| UST adds clarifying words | figs-explicit |
| UST removes repetition | figs-doublet, figs-parallelism |
| UST restructures clause order | figs-infostructure |
| UST replaces figurative language | figs-metaphor |
| UST unpacks abstract noun | figs-abstractnouns |
| UST changes passive to active | figs-activepassive |
| UST expands/explains phrase | figs-idiom |
Skip this step if UST file doesn't exist.
Abstract nouns -- run the detection script:
Use mcp__workspace-tools__detect_abstract_nouns (alignmentJson, format: "tsv").
Passive voice -- identify ALL passive constructions during your verse-by-verse analysis (no script needed). Read the detection instructions in figs-activepassive.md for the passive voice pattern (auxiliary "be" + past participle), stative adjective exclusions, and worked examples. Every passive construction needs a note.
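As a rough illustration of the auxiliary-"be" + past participle pattern described above, here is a minimal sketch. The regex and the stative-adjective exclusion list are simplified assumptions for demonstration; the authoritative detection instructions, including the full exclusion rules, live in figs-activepassive.md.

```python
import re

# Auxiliary "be" forms followed by a word ending in -ed/-en.
AUX = r"(?:am|is|are|was|were|be|been|being)"
# Hypothetical exclusion list: participle-shaped words that are stative adjectives.
STATIVE = {"ashamed", "pleased", "tired"}

def find_passive_candidates(verse: str) -> list:
    """Return 'aux + participle' matches, skipping known stative adjectives."""
    hits = []
    for m in re.finditer(rf"\b{AUX}\s+(\w+(?:ed|en))\b", verse, re.IGNORECASE):
        if m.group(1).lower() not in STATIVE:
            hits.append(m.group(0))
    return hits

print(find_passive_candidates("The city was destroyed, and they were ashamed."))
# ['was destroyed']  -- "were ashamed" is excluded as stative
```

This only surfaces candidates; each match still needs the verse-by-verse judgment described in the skill file, since irregular participles and genuine statives fall outside this pattern.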
Merge detected issues into final output.
When you just have English text (no USFM, no alignments), use --text to run detection directly. This skips source language morphology checks but still finds abstract nouns. Passive voice is identified by Claude during analysis (see figs-activepassive.md).
Use mcp__workspace-tools__detect_abstract_nouns with text and format: "tsv" for quick plain-text checks.
Output uses "text" as the reference since there's no verse structure. Source language fields (morph, lemma) will be empty.
IMPORTANT: Before flagging any name or unknown concept for translate-names or translate-unknown, check if it has a tW article. If a tW article exists, generally NO note is needed.
Use mcp__workspace-tools__check_tw_headwords with a terms array (single or multiple terms).
The script returns JSON with matches (have tW articles) and no_match (may need notes):
Exception: If a term with a tW article is used FIGURATIVELY, use the appropriate figurative note (figs-metaphor, figs-metonymy, etc.) instead of translate-names/translate-unknown.
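The filtering rule above can be sketched as follows. The "matches"/"no_match" field names follow the description above, but the exact JSON payload shape is an assumption; the figurative-terms set is supplied by your own analysis.

```python
def terms_needing_notes(tw_result: dict, figurative_terms: set) -> dict:
    """Split checked terms into those that still need notes and those to skip."""
    skip, note = [], []
    for term in tw_result.get("matches", []):
        # tW article exists: no translate-names/translate-unknown note,
        # unless the term is used figuratively (then a figurative tag applies).
        (note if term in figurative_terms else skip).append(term)
    note.extend(tw_result.get("no_match", []))  # no tW article: may need a note
    return {"skip": skip, "needs_note": note}

result = terms_needing_notes(
    {"matches": ["Jerusalem", "shepherd"], "no_match": ["Maskil"]},
    figurative_terms={"shepherd"},
)
print(result)  # {'skip': ['Jerusalem'], 'needs_note': ['shepherd', 'Maskil']}
```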
After running detection scripts, analyze the text systematically using this four-pass approach. This ensures thorough coverage while managing cognitive load.
If Step 3 produced /tmp/ult_ust_diff.tsv, review it first to prime your attention on verses where UST diverged:
Note the diff_type and suggested_issue columns. This gives you a head start on where translation issues likely exist.
Read through the entire chapter to understand the big picture:
For any unusual phrases noticed, check the published TN index first, then fall back to raw grep:
Use mcp__workspace-tools__build_tn_index with lookup="phrase" for keyword classification precedent. Fallback: raw grep data/published-tns/tn_*.tsv.
For each paragraph or segment identified in Pass 1:
When uncertain about a construction, use mcp__workspace-tools__build_tn_index with lookup="keyword" or issue="figs-metonymy" for fast precedent lookups. Check prior decisions with grep "keyword" data/quick-ref/issue_decisions.csv. Fallback: raw grep data/published-tns/tn_*.tsv.
For each verse (or small verse group), systematically check all issue types using the TaskCreate tool.
Creating the checklist:
Use TaskCreate to generate one task per issue type from data/translation-issues.csv (all 94 types). Example: "Check figs-metaphor in v.3", "Check figs-simile in v.3", etc.
Working through the checklist:
For each task, record a verdict such as "figs-metaphor: 'shield' as protection - yes" or "figs-metaphor: none".

Integrating detection script output:
- Passive voice: follow the detection instructions in figs-activepassive.md
- Names/unknowns: run check_tw_headwords.py before flagging translate-names/translate-unknown
- Precedent: check the TN index (build_tn_index.py --lookup), then data/published-tns/ for similar phrases
- Classification: data/issues_resolved.txt and data/templates.csv have final authority on how issues are classified
- Prior decisions: data/quick-ref/issue_decisions.csv

The goal is coverage: it's easier for reviewers to delete a suggested issue than to identify one from scratch. When in doubt, include it.
For Psalms/Prayers: Make an extra pass for:
Do not stop at the first poetic line.

For Proverbs: Check:
When identifying grammar-connect issues, capture sufficient context:
Too Narrow (Avoid):
Appropriate Context:
Rule: Include enough text that a reader can see the logical relationship being identified.
Make an extra pass looking for quotation marks, quotes-in-quotes, and indirect quotations that should be marked.
After completing issue identification, run these verification steps to catch misclassifications.
When you encounter these words, ALWAYS check the specific issue listed:
| Keyword | Always Check |
|---|---|
| man, men, brothers, sons, fathers | figs-gendernotations (generic masculine?) |
| like, as, than | figs-simile before figs-metaphor |
| hand, hands, eyes, face | figs-metonymy or figs-synecdoche (body part for action/person?) |
| heart | figs-metaphor (heart = thoughts/feelings/will; see template) |
| all, every, never, always | figs-hyperbole (exaggeration for emphasis?) |
| the righteous, the wicked, the poor | figs-nominaladj (adjective as noun?) |
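The keyword-trigger table above can be expressed as a simple lookup. This is a minimal sketch (a subset of the table, naive word splitting, no morphology), meant only to show how a pre-scan could prime the checks, not a substitute for reading each verse.

```python
# Subset of the trigger table above: keywords -> issue type to always check.
TRIGGERS = {
    ("man", "men", "brothers", "sons", "fathers"): "figs-gendernotations",
    ("like", "as", "than"): "figs-simile",
    ("hand", "hands", "eyes", "face"): "figs-metonymy/figs-synecdoche",
    ("heart",): "figs-metaphor",
    ("all", "every", "never", "always"): "figs-hyperbole",
}

def triggered_checks(verse: str) -> set:
    """Return the issue types whose trigger keywords appear in the verse."""
    words = {w.strip(".,;:!?\"'").lower() for w in verse.split()}
    return {issue for keys, issue in TRIGGERS.items() if words & set(keys)}

print(sorted(triggered_checks("All the men set their heart on it.")))
# ['figs-gendernotations', 'figs-hyperbole', 'figs-metaphor']
```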
Before finalizing a tag, check if a related issue fits better:
| If considering... | Also check... | Key distinction |
|---|---|---|
| writing-pronouns | figs-gendernotations | Unclear referent vs. generic masculine |
| figs-metaphor | figs-simile | No comparison word vs. explicit "like/as" |
| figs-metonymy | figs-synecdoche | Associated thing vs. part/whole relationship |
| figs-idiom | figs-metonymy / figs-synecdoche | Fixed cultural expression vs. live figure (body-part triple) |
| figs-doublet | figs-parallelism | Word-level pair vs. clause-level repetition |
| figs-doublet | figs-hendiadys | Synonyms for emphasis vs. one modifies other |
| figs-idiom | figs-metaphor | Fixed expression vs. live comparison |
| figs-hyperbole | figs-merism | General exaggeration vs. two extremes = whole |
| figs-rquestion | figs-exclamations | Question form vs. exclamation form |
| figs-explicit | figs-ellipsis | Adding background info vs. supplying omitted words |
| grammar-connect-logic-goal | grammar-connect-logic-result | Purpose (intended outcome) vs. result (what actually follows) |
When the same phrase could be classified under multiple figurative issue types (e.g., synecdoche, metonymy, and idiom for "a lip of falsehood"), these represent competing analyses of the same feature, not complementary layers. Pick the single best fit.
Decision hierarchy for body-part and cultural expressions:
This hierarchy reflects content team decisions in data/issues_resolved.txt. Grammar-layer issues (figs-abstractnouns, figs-activepassive, figs-possession) remain independent and always coexist alongside a figurative tag on the same phrase.
When classifying body parts, nature imagery, or cultural concepts as metonymy vs metaphor, consult the authoritative lists in figs-metonymy.md and figs-metaphor.md (under "Authoritative Biblical Imagery" sections).
After completing all identification, review your output:
Tag verification: For each issue tagged, can you point to specific criteria in the skill definition it meets? If unsure, re-read the skill file.
Cross-verse interpretive consistency: Scan the full issue list for explanations that reference or depend on adjacent verses. Specifically check:
- If a writing-pronouns issue resolves a referent from another verse (e.g., "it refers to X in the previous verse"), verify that your explanation of X in that other verse is compatible. If v9 says "inheritance" is a metaphor for people, v10 cannot say "it" refers to the land.

Duplicate check: Did you tag the same phrase twice for issues that are really one? (e.g., tagging both figs-doublet and figs-parallelism for the same word pair) Also check for competing figurative analyses: if the same phrase has two or more figurative tags (e.g., figs-synecdoche + figs-metonymy + figs-idiom), keep only the single best fit using the decision hierarchy in "Competing Figurative Analyses" above.
Missing overlap check: Are there phrases that genuinely need two tags? (e.g., a simile that also contains an abstract noun - both figs-simile AND figs-abstractnouns may apply) Abstract nouns, passives (figs-abstractnouns, figs-activepassive) are script-detected and exist at a different analytical layer than figures of speech. They always coexist -- a figurative issue on the same phrase does not replace a grammar issue. Other grammar-level issues (figs-possession, figs-ellipsis, figs-nominaladj) should also generally not be dropped or merged with figurative issues. But multiple figurative issue types on the same phrase (figurative+figurative, not grammar+figurative) represent competing analyses -- see "Competing Figurative Analyses."
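The duplicate check described above can be mechanized in part. This sketch flags phrases carrying two or more figurative tags while letting grammar-layer tags coexist; the grammar-layer set and the tuple shape of the issue rows are illustrative assumptions.

```python
from collections import defaultdict

# Grammar-layer tags may coexist with a figurative tag on the same phrase.
GRAMMAR_LAYER = {"figs-abstractnouns", "figs-activepassive", "figs-possession",
                 "figs-ellipsis", "figs-nominaladj"}

def competing_figurative(issues: list) -> list:
    """issues: (reference, supportreference, quote) tuples.
    Return (reference, quote) keys with more than one figurative tag."""
    tags = defaultdict(list)
    for ref, support, quote in issues:
        if support not in GRAMMAR_LAYER:
            tags[(ref, quote)].append(support)
    return [key for key, ts in tags.items() if len(ts) > 1]

print(competing_figurative([
    ("12:22", "figs-synecdoche", "a lip of falsehood"),
    ("12:22", "figs-metonymy", "a lip of falsehood"),
    ("12:22", "figs-abstractnouns", "a lip of falsehood"),
]))  # [('12:22', 'a lip of falsehood')]
```

Flagged pairs still need the decision hierarchy applied by hand to pick the single best fit.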
Keyword sweep: Scan output for any keyword triggers above that you may have tagged incorrectly.
Consult data/issues_resolved.txt before finalizing issue classifications.
This document contains content team decisions that override other guidance.
# Search for relevant decisions
cat data/issues_resolved.txt | grep -i "[search term]"
The note templates in data/templates.csv reflect confirmed team decisions on how issues are classified and described. When a template exists for an expression (e.g., "heart" under figs-metaphor), that classification is authoritative. Issue identification should tag issues consistently with how templates classify them.
Note: issue-identification produces explanations, not notes. But the template classifications indicate which support reference to use.
# Check how a term is classified in templates
grep -i "heart" data/templates.csv
Pre-built index of all published translation notes by issue type and keyword. Use for fast precedent lookups instead of raw grep.
Use mcp__workspace-tools__build_tn_index with lookup="hand" for keyword lookups or issue="figs-metaphor" for issue type examples.
Source: data/cache/tn_index.json (built from data/published-tns/)
Precedent evidence is positive-only. Finding examples in the index supports a classification. Finding none is only meaningful if the chapter you searched actually has published TNs. Psalms is partially published — many chapters have no published TNs because AI drafting was adopted before they were completed. Do not cite "no results in this chapter" as evidence against a classification.
Accumulated classification decisions from prior runs. Check before re-deriving:
grep "hand of" data/quick-ref/issue_decisions.csv 2>/dev/null
Source: data/quick-ref/issue_decisions.csv (append-only)
When the index doesn't have what you need, search data/published-tns/ directly:
# Search for issue type patterns
grep -i "figs-metonymy" data/published-tns/tn_1SA.tsv | head -20
grep -i "fallen\|sword" data/published-tns/tn_*.tsv
| Tool | Purpose |
|---|---|
| mcp__workspace-tools__fetch_door43 | Fetch USFM from Door43 (supports type="ust" for UST) |
| parse_usfm.js (node) | Parse USFM, extract alignments and plain text (usfm-js) |
| mcp__workspace-tools__compare_ult_ust | Compare ULT/UST plain text to identify divergences suggesting issues |
| mcp__workspace-tools__detect_abstract_nouns | Find abstract nouns (591-word list). Use text="..." for plain English |
| mcp__workspace-tools__check_tw_headwords | Check names/unknowns against tW headwords - filters translate-names/translate-unknown |
| mcp__workspace-tools__build_tn_index | Published TN index lookup. lookup="hand" for keyword, issue="figs-metaphor" for issue type |
During verse-by-verse analysis, watch for passages where meaning is genuinely unclear:
Pronoun Reference Ambiguity (tag: writing-pronouns)
Lexical Polysemy (tag: figs-explicit or existing figure type)
Idiomatic Uncertainty (tag: figs-idiom)
Ellipsis with Multiple Resolutions (tag: figs-ellipsis or figs-explicit)
Detection signals:
Explanation field format for TCM notes:
When flagging ambiguity that requires a "this could mean" note, use TCM keyword plus i: prefix with numbered options:
Format: TCM i:(1) [option A] (2) [option B]
Examples:
job 9:35 figs-idiom I am not so with myself TCM i:(1) I do not consider myself guilty (2) I am not in my right mind from fear
job 9:3 writing-pronouns he wished to contend TCM i:(1) God (2) a person who wanted to contend with God
1jn 4:3 figs-explicit is not from God TCM i:(1) sent by God (2) having God as its source
The TCM trigger tells the note writer to format using "This could mean (1)... or (2)..." structure while still using the issue type's template for context.
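The TCM explanation string described above can be built mechanically from the interpretive options. A minimal sketch (the helper name is hypothetical):

```python
def tcm_explanation(options: list) -> str:
    """Format numbered interpretive options as 'TCM i:(1) ... (2) ...'."""
    numbered = " ".join(f"({i}) {opt}" for i, opt in enumerate(options, 1))
    return f"TCM i:{numbered}"

print(tcm_explanation([
    "I do not consider myself guilty",
    "I am not in my right mind from fear",
]))
# TCM i:(1) I do not consider myself guilty (2) I am not in my right mind from fear
```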
Web search as fallback: When internal resources (Issues Resolved, published TNs, Translation Academy) don't clarify a potentially ambiguous passage:
Search for "[book] [chapter]:[verse] interpretation" or "[Greek/Hebrew term] meaning".

Fallback tag: When ambiguity doesn't fit existing categories, use figs-explicit with a note explaining the interpretive options.
See reference/ambiguity_patterns.md for detailed examples from published notes.
Use --categories all to force all categories.

After identifying issues, output a tab-separated file to output/issues/:
output/issues/[BOOK]/[BOOK]-[CHAPTER].tsv
Examples:
- output/issues/PSA/PSA-063.tsv - Psalm 63
- output/issues/GEN/GEN-001.tsv - Genesis 1
- output/issues/2SA/2SA-001.tsv - 2 Samuel 1

Use three-letter book codes and three-digit chapter numbers (zero-padded).
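The path convention can be sketched as a one-line formatter (the helper name is hypothetical):

```python
def issues_path(book: str, chapter: int) -> str:
    """Build the output path: 3-letter code, 3-digit zero-padded chapter."""
    code = book.upper()
    return f"output/issues/{code}/{code}-{chapter:03d}.tsv"

print(issues_path("psa", 63))  # output/issues/PSA/PSA-063.tsv
print(issues_path("2sa", 1))   # output/issues/2SA/2SA-001.tsv
```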
Format:
[book]\t[chapter:verse]\t[supportreference]\t[ULT text]\t\t\t[explanation if needed]
| Column | Description |
|---|---|
| book | 3-letter abbreviation (psa, gen, mat, etc.) |
| chapter:verse | Single-verse reference (78:17). Never use verse ranges — see rules below. |
| supportreference | Issue type (figs-metaphor, writing-pronouns, etc.) |
| ULT text | English phrase copied verbatim from the ULT — exact words, exact inflections, from one verse only |
| (empty) | Reserved |
| (empty) | Reserved |
| explanation | Brief note if issue not obvious from text (optional) |
Reference and quote rules:
- Single-verse references only; the one exception is translate-versebridge, which spans two verses by definition.
- Use & to join only genuinely discontinuous phrases - where unrelated text separates the relevant words in the verse. If the phrases are adjacent or separated only by punctuation/conjunctions, expand the quote to include the connecting text instead. Good & use: two different referents for "these" separated by a clause. Bad & use: "oppose my opponents & fight those fighting me" when the ULT reads "oppose my opponents; fight those fighting me" - just quote the full span.

Ordering: Within each verse, output issues in ULT reading order: sort by where each quote begins, listing larger spans before the sub-spans they contain.
Example for "For you are a refuge to me, a strong tower from the face of the enemy":
psa 61:3 figs-metaphor For you are a refuge to me, a strong tower from the face of the enemy
psa 61:3 figs-metaphor For you are a refuge to me
psa 61:3 figs-metaphor a refuge
psa 61:3 figs-metonymy from the face of the enemy
psa 61:3 figs-possession of the enemy
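The ordering shown in the example above can be sketched as a sort key: position of the quote in the verse first, then longer spans before the sub-spans they contain. This is a minimal sketch assuming each quote occurs verbatim in the verse (the & and discontinuous-phrase cases need extra handling).

```python
def order_issues(ult_verse: str, issues: list) -> list:
    """issues: (supportreference, quoted ULT text) pairs for one verse.
    Sort into ULT reading order, larger spans before their sub-spans."""
    def key(issue):
        _, quote = issue
        start = ult_verse.find(quote)   # position of the quote in the verse
        return (start, -len(quote))     # earlier first; longer span first
    return sorted(issues, key=key)

verse = "For you are a refuge to me, a strong tower from the face of the enemy"
issues = [
    ("figs-possession", "of the enemy"),
    ("figs-metaphor", "a refuge"),
    ("figs-metaphor", verse),
    ("figs-metonymy", "from the face of the enemy"),
    ("figs-metaphor", "For you are a refuge to me"),
]
for ref, quote in order_issues(verse, issues):
    print(ref, quote)
```

Running this reproduces the Psalm 61:3 ordering shown above.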
General example:
psa 78:17 writing-pronouns And they added ancestors/israelites
psa 78:19 figs-rquestion Is God able rhetorical - asserting doubt
gen 1:5 figs-infostructure evening and morning time phrase order
94 issue types organized into 7 categories: Discourse Structure, Grammar, Clause Relations, Figures of Speech, Speech Acts, Information Management, Cultural/Reference.
For the full catalog with links to each issue skill, see reference/issue-types-catalog.md.
For detailed recognition guidance, consult the individual issue skill files.
To create a skill for a new translation issue:
- See ../utilities/create-issue-skill.md
- data/translation-issues.csv for issue list and tracking