Use when writing or improving a reduction-rule entry in the Typst paper (docs/paper/reductions.typ)
Full authoring guide for writing a reduction-rule entry in docs/paper/reductions.typ. Covers Typst mechanics, writing quality, and verification.
Note: This content is also inlined in the add-rule skill, Step 5 (condensed form). This standalone version has more detail and is useful for improving existing entries.
KColoring → QUBO in docs/paper/reductions.typ is the gold-standard reduction example. Search for `reduction-rule("KColoring", "QUBO"` to find the complete entry. Use it as a template for style, depth, and structure.
Before using this skill, ensure:

- The rule is implemented (`src/rules/<source>_<target>.rs`)
- The canonical example is registered in `src/example_db/rule_builders.rs`
- Fixtures have been regenerated (`make regenerate-fixtures`)
- Graph and schema exports are current (`cargo run --example export_graph && cargo run --example export_schemas`)

For mathematical content (theorems, proofs, examples), consult these sources in priority order:

1. The GitHub issue (`gh issue view <number>`): contains the verified reduction algorithm, correctness proof, size overhead, and worked examples written during issue creation
2. The derivation documents `~/Downloads/reduction_derivations_*.typ`: these contain batch-verified proofs with explicit theorem/proof blocks
3. The rule implementation (`src/rules/<source>_<target>.rs`): the code is the ground truth for the construction

Do NOT invent proofs: always cross-check against the issue and derivation sources. The issue body typically has Reduction Algorithm, Correctness (forward/backward), Size Overhead, Example, and References sections that map directly to the paper entry structure.
#let src_tgt = load-example("Source", "Target")
#let src_tgt_sol = src_tgt.solutions.at(0)
Where:

- `load-example(source, target, ...)` looks up the canonical rule entry from `src/example_db/fixtures/examples.json`
- The returned value has `source`, `target`, and `solutions` fields
- Instance and configuration data are accessed via `src_tgt.source.instance`, `src_tgt.target.instance`, `src_tgt_sol.source_config`, `src_tgt_sol.target_config`

The theorem body is a concise block with three parts:
State the reduction's time complexity with a citation. Examples:
% With verified reference:
This $O(n + m)$ reduction @Author2023 constructs ...
% Without verified reference — add footnote:
This $O(n^2)$ reduction#footnote[Complexity not independently verified from literature.] constructs ...
Verification: Identify the best known reference for this reduction's complexity. If you cannot find a peer-reviewed or textbook source, you MUST add the footnote.
One sentence describing what the reduction builds:
... constructs an intersection graph $G' = (V', E')$ where ...
State target dimensions in terms of source. This complements the auto-derived overhead (which appears automatically from JSON edge data):
... ($n k$ variables indexed by $v dot k + c$).
][
Given $G = (V, E)$ with $k$ colors, construct upper-triangular
$Q in RR^(n k times n k)$ using one-hot encoding $x_(v,c) in {0,1}$
($n k$ variables indexed by $v dot k + c$).
]
The proof must be self-contained (all notation defined before use) and reproducible (enough detail to reimplement the reduction from the proof alone).
Use these subsections in order. Use italic labels exactly as shown:
][
_Construction._ ...
_Correctness._ ...
_Variable mapping._ ... // only if the reduction has a non-trivial variable mapping
_Solution extraction._ ...
]
Full mathematical construction of the target instance. Define all symbols and notation here.
For standard reductions (< 300 LOC): Write the complete construction with enough math to reimplement.
For heavy reductions (300+ LOC): Briefly describe the approach and cite a reference:
_Construction._ The reduction follows the standard Cook–Levin construction @Cook1971,
encoding each gate as a set of clauses. See @Source for full details.
Bidirectional (iff) argument showing solution correspondence. Use ($arrow.r.double$) and ($arrow.l.double$) for each direction:
_Correctness._ ($arrow.r.double$) If $S$ is independent, then ...
($arrow.l.double$) If $C$ is a vertex cover, then ...
Explicitly state how source variables map to target variables. Include this section when the mapping is non-trivial (encoding, expansion, reindexing). Omit for identity mappings or trivial complement operations.
_Variable mapping._ Vertices $= {S_1, ..., S_m}$, edges $= {(S_i, S_j) : S_i inter S_j != emptyset}$, $w(v_i) = w(S_i)$.
How to convert a target solution back to a source solution:
_Solution extraction._ For each vertex $v$, find $c$ with $x_(v,c) = 1$.
Detailed by default. Only use a brief example for trivially obvious reductions (complement, identity).
#reduction-rule("Source", "Target",
example: true,
example-caption: [Description ($n = ...$, $|E| = ...$)],
extra: [
#pred-commands(
"pred create --example <ALIAS> -o <name>.json",
"pred reduce <name>.json --to " + target-spec(src_tgt) + " -o bundle.json",
"pred solve bundle.json",
"pred evaluate <name>.json --config " + src_tgt_sol.source_config.map(str).join(","),
)
// Optional: graph visualization
#{
// canvas code for graph rendering
}
*Step 1 -- [action].* [description with concrete numbers]
*Step 2 -- [action].* [construction details]
// ... more steps as needed
*Step N -- Verify a solution.* [end-to-end verification]
*Multiplicity:* The fixture stores one canonical witness. If total multiplicity matters, explain it from the construction.
],
)
Add a pred-commands() block at the top of the extra: content, before any data display or visualization. The source-side pred create --example ... spec must be derived from the loaded example data, not handwritten:
#let problem-spec(data) = {
if data.variant.len() == 0 { data.problem }
else { data.problem + "/" + data.variant.values().join("/") }
}
#pred-commands(
"pred create --example " + problem-spec(src_tgt.source) + " -o <source>.json",
"pred reduce <source>.json --to " + target-spec(src_tgt) + " -o bundle.json",
"pred solve bundle.json",
"pred evaluate <source>.json --config " + src_tgt_sol.source_config.map(str).join(","),
)
If docs/paper/reductions.typ already defines a shared problem-spec() helper, reuse it instead of duplicating it. Do not guess whether the default variant matches the canonical example; canonical rule fixtures can point at non-default source variants, and handwritten bare aliases can silently break the reproducibility block.
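A minimal sketch of the failure mode and its fix (the `KColoring` alias and file names here are illustrative, not taken from an actual entry):

```typst
// WRONG: handwritten bare alias. If the canonical fixture uses a
// non-default source variant, this command no longer reproduces it.
"pred create --example KColoring -o kcoloring.json"

// RIGHT: the spec is derived from the loaded fixture, so any
// non-default variant is included automatically.
"pred create --example " + problem-spec(src_tgt.source) + " -o kcoloring.json"
```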
The target-spec() helper handles empty variant dicts automatically. The --config is composed from src_tgt_sol.source_config.
Each step should begin with a bold label of the form `*Step K -- [verb phrase].*` Content by position:

| Step | Content |
|---|---|
| First | Show the source instance (dimensions, structure). Include graph visualization if applicable. |
| Middle | Walk through the construction. Show intermediate values. Explicitly quantify overhead. |
| Second-to-last | Verify a concrete solution end-to-end (source config → target config, check validity). |
| Last | State that the fixture stores one canonical witness; if multiplicity matters, justify it mathematically from the construction. |
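Put together, the step pattern above might read as follows for a hypothetical coloring-to-QUBO entry (all numbers and names are illustrative, not taken from a real fixture):

```typst
*Step 1 -- Read the source instance.* The example graph has
$n = 4$ vertices and $|E| = 5$ edges.

*Step 2 -- Build the target.* With $k = 3$ colors, the one-hot
encoding yields $n k = 12$ binary variables, so $Q$ is a
$12 times 12$ upper-triangular matrix (quantifying the overhead).

*Step 3 -- Verify a solution.* The source coloring
#src_tgt_sol.source_config.map(str).join(", ") maps to the target
assignment #src_tgt_sol.target_config.map(str).join(", "), and both
check out under `pred evaluate`.

*Multiplicity:* The fixture stores one canonical witness; here each
proper coloring corresponds to exactly one one-hot assignment, so the
multiplicity claim follows from the construction rather than from the
fixture length.
```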
#{
let fills = src_tgt_sol.source_config.map(c => graph-colors.at(c))
align(center, canvas(length: 0.8cm, {
for (u, v) in graph.edges { g-edge(graph.vertices.at(u), graph.vertices.at(v)) }
for (k, pos) in graph.vertices.enumerate() {
g-node(pos, name: str(k), fill: fills.at(k), label: str(k))
}
}))
}
// Source configuration (e.g., color assignments)
#src_tgt_sol.source_config.map(str).join(", ")
// Target configuration (e.g., binary encoding)
#src_tgt_sol.target_config.map(str).join(", ")
// The canonical witness pair
#src_tgt.solutions.at(0)
// Source instance fields
#src_tgt.source.instance.num_vertices
If this is a new problem not yet in the paper, add to the display-name dictionary near the top of reductions.typ:
"ProblemName": [Display Name],
# Build the paper
make paper
Verification checklist:

- All example data comes from `load-example`/`load-results`, not hardcoded values
- `solutions.at(0)` is used as the canonical witness; any multiplicity claim is derived mathematically, not from the fixture length
- A `pred-commands()` block sits at the top of the example with the create/reduce/solve/evaluate pipeline
- `make paper` succeeds without errors

For simpler reductions, see MinimumVertexCover ↔ MaximumIndependentSet as a minimal example.