Run the full target-to-hypothesis pipeline for metamaterial design. Use when a user provides a target EM specification and you need to generate ranked design hypotheses with CST simulation specs and evidence-backed rationale.
End-to-end pipeline that produces ranked, CST-simulation-ready metamaterial design hypotheses from a target specification.
Phase 1: Target → Structured Spec (Steps 0-1, interactive)
Phase 2: Literature Search → Ranked Hypotheses (Steps 2-7, automated)
Phase 3: Deep Analysis & Figure Report (Step 8, semi-automated)
Use target-clarifier skill. If the user's description is vague, ask clarifying questions before proceeding.
Use D:/Claude/target_to_hypothesis/config/material_database.yaml for available substrate/conductor options.
Use target-interpreter skill.
cd D:/Claude && python -c "
from target_to_hypothesis.skills.target_interpreter import interpret_target
import json

target = interpret_target(
    description='USER_DESCRIPTION_HERE',
    freq_min_ghz=FREQ_MIN,
    freq_max_ghz=FREQ_MAX,
    constraints={...},
)
print(json.dumps(target.model_dump(), indent=2, default=str))
"
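FREQ_MIN and FREQ_MAX are placeholders for the user's band in GHz. A small, hypothetical pre-check (not part of the skill itself) can catch inverted or non-physical ranges before calling interpret_target:

```python
def validate_band_ghz(freq_min: float, freq_max: float) -> tuple[float, float]:
    """Sanity-check a frequency band before handing it to the interpreter.

    Hypothetical helper: rejects non-positive values and tolerates a
    swapped (min, max) pair instead of failing downstream.
    """
    if freq_min <= 0 or freq_max <= 0:
        raise ValueError('frequencies must be positive (GHz)')
    if freq_min > freq_max:
        freq_min, freq_max = freq_max, freq_min  # tolerate swapped inputs
    return freq_min, freq_max
```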
Step 2: Query Planner → LiteratureQueryPlan (query-planner)
Step 3: Paper Retrieval → PaperCandidate list (scholar-crawler, primary)
Step 4: Frequency Filter → Filtered papers (frequency-filter)
Step 5: Paper Reader → PaperRecord list (paper-reader, LLM abstract classification)
Step 6: Evidence Grader → EvidenceGradeReport (evidence-grader)
Step 7: Hypothesis Generator → RankedHypotheses + CSTSpecs (hypothesis-generator → hypothesis-ranker)
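At its core, the Step 4 frequency filter is an interval-overlap test against the target band. A minimal sketch of that logic (the actual frequency-filter skill may apply tolerances or partial-overlap scoring):

```python
def band_overlaps(paper_min_ghz: float, paper_max_ghz: float,
                  target_min_ghz: float, target_max_ghz: float) -> bool:
    """True when a paper's reported band intersects the target band.

    Illustrative only -- two closed intervals overlap exactly when each
    one's minimum does not exceed the other's maximum.
    """
    return paper_min_ghz <= target_max_ghz and paper_max_ghz >= target_min_ghz

# Example: a 10-15 GHz paper overlaps an 8-12 GHz target band.
```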
Papers are analyzed by their actual structures/geometries as described by the authors. There is NO family mapping step — no predefined ontology is imposed.
cd D:/Claude && python -c "
import json, sys
sys.path.insert(0, '.')
from target_to_hypothesis.pipelines.run_target_to_hypothesis import run_pipeline, PipelineConfig

config = PipelineConfig(
    max_hypotheses=5,
    total_paper_limit=30,
    save_artifacts=True,
)
result = run_pipeline(
    description='USER_DESCRIPTION_HERE',
    freq_min_ghz=FREQ_MIN,
    freq_max_ghz=FREQ_MAX,
    constraints={...},
    config=config,
)
print(f'Papers retrieved: {len(result.paper_candidates)}')
print(f'Evidence quality: {result.evidence_grades.overall_evidence_quality}')
print(f'Hypotheses: {len(result.ranked_hypotheses.hypotheses)}')
for i, h in enumerate(result.ranked_hypotheses.hypotheses):
    print(f'  {i+1}. {h.family_display_name or h.family} (score={h.score:.3f})')
print(f'Recommendation: {result.ranked_hypotheses.recommendation}')
"
If Semantic Scholar is rate-limited, use the scholar-crawler skill (browser-based Google Scholar) as the primary retrieval method.
Optional: Use embedding-searcher skill to re-rank papers by semantic similarity if the initial retrieval is noisy.
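The re-ranking idea is plain cosine similarity between the target description's embedding and each paper's embedding. A minimal sketch assuming embedding vectors are already available (the embedding-searcher skill's actual interface may differ):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity of two vectors; 0.0 if either is the zero vector."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rerank(target_vec, papers):
    """papers: list of (paper_id, embedding) pairs. Highest similarity first."""
    return sorted(papers, key=lambda p: cosine(target_vec, p[1]), reverse=True)
```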
This phase extracts full text + figures from the top-ranked papers (typically top 10). Three methods are attempted in priority order.
For each top paper, check open-access status via OpenAlex:
curl -s "https://api.openalex.org/works/doi:DOI_HERE" | python -c "
import json, sys
w = json.load(sys.stdin)
oa = w.get('open_access', {})
print(f'OA: {oa.get(\"is_oa\")}, URL: {oa.get(\"oa_url\")}')"
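The same check can be done in pure Python once the response is in hand. is_oa and oa_url are real fields of OpenAlex's open_access object, but the sample record below is synthetic:

```python
import json

def oa_status(work: dict) -> tuple:
    """Extract the open-access flag and URL from an OpenAlex work record."""
    oa = work.get('open_access') or {}
    return bool(oa.get('is_oa')), oa.get('oa_url')

# Synthetic response fragment shaped like the OpenAlex schema:
sample = json.loads('{"open_access": {"is_oa": true, "oa_url": "https://example.org/paper.pdf"}}')
print(oa_status(sample))  # (True, 'https://example.org/paper.pdf')
```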
Method A (best approach): extract figure image URLs directly from the browser DOM, then download via curl.
Use browser-paper-reader skill. Steps:
Navigate to paper page in MCP browser (campus access):
navigate(url="https://doi.org/DOI", tabId=tab_id)
Extract all image URLs from DOM:
javascript_tool(text="Array.from(document.querySelectorAll('img')).map(i => ({src: i.src, alt: i.alt, w: i.naturalWidth, h: i.naturalHeight}))", tabId=tab_id)
Filter for figure images (width > 300px, alt contains "Fig"/"figure", URL contains "figure"/"graphic")
Download via curl:
curl -L -o D:/Claude/artifacts/figures/paperN_author_fig_M.png "IMAGE_URL"
Extract page text:
get_page_text(tabId=tab_id)
Why: Publisher CDNs often serve images without auth — only the HTML page is paywalled.
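The filtering step above can be sketched as a predicate over the DOM-extracted image records. Field names follow the javascript_tool snippet (src, alt, w, h); the thresholds are the ones stated, not tuned values:

```python
def is_figure_image(img: dict) -> bool:
    """Heuristic filter for figure images from the DOM dump.

    img: {'src': str, 'alt': str, 'w': int, 'h': int} as produced by the
    querySelectorAll snippet. Keeps sufficiently wide images whose alt
    text or URL looks figure-like.
    """
    alt = (img.get('alt') or '').lower()
    src = (img.get('src') or '').lower()
    wide_enough = (img.get('w') or 0) > 300
    figure_like = 'fig' in alt or 'figure' in src or 'graphic' in src
    return wide_enough and figure_like
```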
Method B, when Method A fails (the CDN also requires auth):
cd D:/Claude && python artifacts/upload_server.py &
cd D:/Claude && python artifacts/papers/process_all.py
D:/Claude/artifacts/figures/{prefix}_fig_{N}.png
D:/Claude/artifacts/papers/{prefix}.md
Axiomatic API key: stored in CLAUDE.md or provided by user.
Method C, for papers where neither Method A nor Method B works:
Use phase3-analyze-paper skill:
cd D:/Claude && python -c "
from target_to_hypothesis.skills.phase3_paper_analyzer import analyze_paper
from target_to_hypothesis.utils.llm import make_llm_fn

llm_fn = make_llm_fn(max_tokens=3000)
result = analyze_paper(
    title='...',
    page_text='...',
    page_url='...',
    target_description='...',
    llm_fn=llm_fn,
)
"
Use phase3-synthesize skill to find common patterns, mechanisms, and design recommendations across all analyzed papers.
Use phase3-report skill. Build D:/Claude/artifacts/reports/phase3_report.md with:
D:/Claude/target_to_hypothesis/config/mechanism_ontology.yaml — 12 physical mechanisms
D:/Claude/target_to_hypothesis/config/material_database.yaml — substrate/conductor properties
D:/Claude/target_to_hypothesis/config/scoring_weights.yaml — scoring weights
D:/Claude/artifacts/upload_server.py — user PDF upload server (port 18800; start only when needed, stop when the user says done)
D:/Claude/artifacts/papers/process_all.py — batch Axiomatic PDF processor
D:/Claude/target_to_hypothesis/pipelines/run_target_to_hypothesis.py
D:/Claude/artifacts/papers/ — downloaded PDFs and parsed markdown
D:/Claude/artifacts/figures/ — extracted figure PNGs
D:/Claude/artifacts/reports/ — generated reports

Phase 1: target-clarifier, target-interpreter
Phase 2: query-planner, scholar-crawler, embedding-searcher, frequency-filter, paper-reader, evidence-grader, hypothesis-generator, hypothesis-ranker, hypothesis-versioning
Phase 3: browser-paper-reader, phase3-analyze-paper, phase3-synthesize, phase3-report