ALWAYS use this skill when producing clarification questions for any skill-building purpose (domain, source, data-engineering, platform). Invoke it immediately in the research phase: score candidate dimensions, select the top dimensions, run parallel dimension research, and return the complete `clarifications.json` payload. Do not attempt to produce clarifications without using this skill.
Given a purpose, produce the clarification questions a user must answer to create a Vibedata skill fit for that purpose.
Always apply the purpose-aware lens:
For platform or data-engineering purposes, enforce Lakehouse-first constraints explicitly.

| Purpose (label or token) | Dimension set |
|---|---|
| Business process knowledge (`domain`) | Domain Dimensions |
| Source system customizations (`source`) | Source Dimensions |
| Organization-specific data engineering standards (`data-engineering`) | Data-Engineering Dimensions |
| Organization-specific Azure or Fabric standards (`platform`) | Platform Dimensions |
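The purpose-to-dimension-set mapping in the table above can be sketched as a simple lookup. This is an illustrative helper, not part of the skill's contract; the function name and dictionary are assumptions, while the tokens and set names come from the table.

```python
# Hypothetical lookup mirroring the purpose table above.
# Keys are the purpose tokens; values are the dimension set names.
DIMENSION_SETS = {
    "domain": "Domain Dimensions",
    "source": "Source Dimensions",
    "data-engineering": "Data-Engineering Dimensions",
    "platform": "Platform Dimensions",
}

def dimension_set_for(purpose_token: str) -> str:
    """Return the dimension set for a purpose token (illustrative only)."""
    return DIMENSION_SETS[purpose_token]
```

Any unrecognized token would raise a `KeyError` here; in practice the skill maps a purpose to exactly one of these four sets.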
Map the purpose to one dimension set from `references/dimension-sets.md`. Score candidate dimensions with `references/scoring-rubric.md`, short-circuiting when `topic_relevance` is `not_relevant`. Consolidate via `references/consolidation-handoff.md`, validate against `references/schemas.md`, and return the `clarifications.json` object as top-level JSON.

Read `references/dimension-sets.md` and select the matching section.
Use `references/scoring-rubric.md` to produce scoring-only JSON for all candidate dimensions.
Use that scoring JSON to construct `metadata.research_plan`, which is part of `clarifications.json`; its schema is defined in `references/schemas.md`.
Set the following fields:

- `topic_relevance` from the scoring JSON (`relevant|not_relevant`).
- `dimensions_evaluated` from the count of entries in the `candidate_dimension_scores` array in the scoring JSON.
- `dimension_scores` from `candidate_dimension_scores` (`name`, `score`, `reason`, `focus`, `companion_skill`).

If `topic_relevance` is `not_relevant`, return the canonical minimal/scope-recommendation clarifications output per `references/schemas.md` with:

- `metadata.scope_recommendation: true`
- `metadata.warning.code: "all_dimensions_low_score"`
- `metadata.warning.message`: a concise explanation for the UI
- `metadata.research_plan` present and schema-valid with minimal values per the `references/schemas.md` Scope/Error Minimal Output (including `topic_relevance: "not_relevant"`, zero counts, and empty selected arrays)

Apply the remaining steps only when `topic_relevance` is `relevant`.
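The minimal scope-recommendation output described above might look like the following sketch. Only fields named in this document are shown; the warning message text is a placeholder, and `references/schemas.md` remains the authoritative definition of the shape.

```python
import json

# Sketch of the minimal scope-recommendation payload. Field names come
# from the bullets above; the message text is an illustrative assumption.
minimal_output = {
    "metadata": {
        "scope_recommendation": True,
        "warning": {
            "code": "all_dimensions_low_score",
            "message": "No candidate dimension scored as relevant for this purpose.",
        },
        "research_plan": {
            "topic_relevance": "not_relevant",
            "dimensions_evaluated": 0,   # zero counts per the minimal output rules
            "dimensions_selected": 0,
            "dimension_scores": [],       # empty arrays per the minimal output rules
            "selected_dimensions": [],
        },
    },
}

payload = json.dumps(minimal_output, indent=2)
```

The payload is returned as a top-level JSON object with no wrapper, matching the return rules later in this document.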
Update the `metadata.research_plan` created in Step 2.
Set:

- `selected_dimensions` as an array of `{ name, focus }` objects copied from the selected `dimension_scores` entries.
- `dimensions_selected` from the count of selected dimensions.

For each selected dimension object in `metadata.research_plan.selected_dimensions`:

- Read `references/dimensions/{name}.md` (use the selected object's `name` field as the slug).
- Launch a parallel research task (with `bypassPermissions`).
- Scope each task to its entry in `metadata.research_plan.selected_dimensions`.
- Wait for all tasks before consolidation.
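The selection and research-plan update described above could be sketched as follows. The score threshold, helper name, and sample data are illustrative assumptions; the actual selection criteria come from the scoring rubric.

```python
# Hypothetical selection of dimensions from scoring JSON.
# The threshold (score >= 3) is an assumption for illustration only.
def select_dimensions(candidate_dimension_scores, threshold=3):
    """Copy { name, focus } objects from entries meeting the threshold."""
    return [
        {"name": d["name"], "focus": d["focus"]}
        for d in candidate_dimension_scores
        if d["score"] >= threshold
    ]

# Sample scoring entries (invented for the sketch).
scores = [
    {"name": "data-modeling", "score": 4, "focus": "star schema conventions"},
    {"name": "naming", "score": 2, "focus": "object naming rules"},
]

selected = select_dimensions(scores)
# Update the research plan per the steps above.
research_plan_update = {
    "selected_dimensions": selected,
    "dimensions_selected": len(selected),
}
```

Each object in `selected_dimensions` then drives one parallel research task, reading `references/dimensions/{name}.md` by its `name` slug.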
Use `references/consolidation-handoff.md` to produce the canonical `clarifications.json` and return it.
Return only the canonical clarifications JSON object as top-level output (no wrappers and no additional text).
Before returning:
- The output matches `references/schemas.md` exactly.
- `metadata.research_plan` is present and schema-valid.
- `metadata.research_plan.selected_dimensions` is present as `{ name, focus }` objects aligned to the selected dimensions.
- Field names are correct (`notes` vs `answer_evaluator_notes`).
- Optional fields are used correctly (`metadata.warning` and `metadata.error`).

All-low-scores behavior:
When `topic_relevance` is `not_relevant`, emit the minimal scope-recommendation payload from `references/schemas.md` with `metadata.scope_recommendation: true` and no dimension fan-out.

Runtime resilience:
If a dimension research task fails, record that dimension with score `1` and reason `Research task failed`, then continue with the available results.