Generate measurable learning outcomes aligned with Bloom's taxonomy and CEFR proficiency levels for educational content.
Use this skill when educators need to define what students will achieve, create learning objectives
for curriculum planning, or ensure objectives are specific and testable rather than vague.
This skill helps break down complex topics into progressively building learning goals with clear
assessment methods and success criteria.
Skill Content
Purpose
Enable educators to create measurable, actionable learning objectives aligned with Bloom's taxonomy and CEFR proficiency levels. This skill helps:
Define what students will achieve (not just what topics they'll cover)
Ensure objectives are specific and testable (not vague)
Identify prerequisites and scaffold learning progressively
Plan appropriate assessment methods
Sequence learning from basic recall to creative synthesis
Map to international proficiency standards (CEFR A1-C2) for portability
Include AI co-learning outcomes (working WITH AI, not just independently)
Constitution v4.0.1 Alignment: This skill implements evals-first objective design—defining success criteria BEFORE creating learning objectives, integrating CEFR proficiency levels (Principle 5: Progressive Complexity), and incorporating Section IIb (AI Three Roles Framework) co-learning outcomes.
When to Activate
Use this skill when:
Planning curriculum or lesson design and need to define learning outcomes
Creating assessments and want to align them with clear objectives
Designing a course and need measurable outcomes for accreditation
Educators ask to "define objectives", "create learning goals", "set outcomes", or "what should students achieve?"
Reviewing existing objectives and wondering if they're specific enough
Designing a lesson and unsure what students should be able to do by the end
Each objective should answer:
In what context? (the specific situation or problem)
How will we know they succeeded? (measurable criteria)
What proficiency level? (CEFR A1-C2)
CEFR Proficiency Mapping (Constitution v3.1.2)
Align objectives with international proficiency standards (from skills-proficiency-mapper v2.0):
A1 (Beginner - Recognition):
Bloom's: Remember/Understand only
Example: "Identify Python syntax for defining a function"
Measurable: Recognition, not production
A2 (Elementary - Guided Application):
Bloom's: Understand/Apply with scaffolding
Example: "Complete a function definition with provided hints"
Measurable: Application with support
B1 (Intermediate - Independent Application):
Bloom's: Apply independently
Example: "Implement a function from clear specification without assistance"
Measurable: Real-world application without scaffolding
B2 (Upper-Intermediate - Analysis):
Bloom's: Analyze/Evaluate
Example: "Compare two implementations and justify which is more maintainable"
Measurable: Evaluation with justification
C1 (Advanced - Creation/Synthesis):
Bloom's: Evaluate/Create
Example: "Design a system architecture for scalable deployment"
Measurable: Original design with trade-off analysis
Proficiency Progression Rule: Lessons should progress A1→A2→B1 within a chapter (not jump from A1 to C1).
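The CEFR-to-Bloom's mapping above can be expressed as a small lookup table that flags misaligned objectives. This is an illustrative sketch, not part of the skill's actual tooling; the function and level names are assumptions:

```python
# Allowed Bloom's levels per CEFR band, per the mapping above.
CEFR_BLOOMS = {
    "A1": {"Remember", "Understand"},
    "A2": {"Understand", "Apply"},
    "B1": {"Apply"},
    "B2": {"Analyze", "Evaluate"},
    "C1": {"Evaluate", "Create"},
}

def check_alignment(cefr_level: str, blooms_level: str) -> bool:
    """Return True if the Bloom's level fits the CEFR band."""
    return blooms_level in CEFR_BLOOMS.get(cefr_level, set())

print(check_alignment("A1", "Remember"))  # True: recognition-level task
print(check_alignment("A1", "Create"))    # False: jumps from A1 to synthesis
```

A check like this also enforces the progression rule mechanically: an A1 lesson whose objective uses a "Create"-level verb is rejected before it reaches learners.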
Three-Role AI Partnership Objectives (Section IIb, Constitution v4.0.1)
CRITICAL: AI-native learning objectives must include the ability to work WITH AI in a bidirectional co-learning partnership (per Section IIb forcing functions), not just independently.
Traditional Objective Format:
LO-001: Implement user authentication (independent skill)
AI-Native Objective Format:
LO-001: Implement user authentication working with AI as co-learning partner
- Use AI as Teacher: Learn security patterns from AI suggestions
- Use AI as Student: Refine AI's output through clear specifications
- Use AI as Co-Worker: Iterate toward optimal solution collaboratively
- Validate: Verify AI-generated code meets security requirements
Three-Role Objective Types:
1. AI as Teacher Objectives (Student learns from AI):
"Identify pattern suggested by AI that improves code quality"
"Explain trade-offs in AI's proposed approaches"
"Apply AI-suggested pattern to new context"
2. AI as Student Objectives (Student teaches AI):
"Write specification that produces correct code on first try"
"Provide feedback that improves AI's next iteration"
"Clarify requirements when AI asks for disambiguation"
3. AI as Co-Worker Objectives (Collaborative iteration):
"Iterate with AI to converge on optimal solution"
"Make strategic decisions while AI handles tactical implementation"
"Validate AI outputs for correctness and appropriateness"
Example AI-Native Objective Set:
- id: "LO-AUTH-001"
  statement: "Implement OAuth authentication working with AI as co-learning partner"
  blooms_level: "Apply"
  cefr_level: "B1"
  three_role_integration:
    ai_as_teacher: "Learn refresh token rotation pattern from AI suggestion"
    ai_as_student: "Guide AI through security requirements via clear spec"
    ai_as_coworker: "Iterate on session management approach together"
  assessment_method: "Code + reflection: Show implementation AND what you learned from AI"
  success_criteria:
    - "OAuth implementation works correctly"
    - "Student identifies at least one pattern learned from AI"
    - "Student demonstrates validation of AI output"
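An objective set in this shape can be sanity-checked in code. The sketch below (field names taken from the example above; this is not a published schema) verifies that an AI-native objective covers all three roles:

```python
REQUIRED_ROLES = ("ai_as_teacher", "ai_as_student", "ai_as_coworker")

def missing_roles(objective: dict) -> list[str]:
    """Return the role fields that are absent or empty (empty list = complete)."""
    roles = objective.get("three_role_integration", {})
    return [r for r in REQUIRED_ROLES if not roles.get(r)]

objective = {
    "id": "LO-AUTH-001",
    "three_role_integration": {
        "ai_as_teacher": "Learn refresh token rotation pattern from AI suggestion",
        "ai_as_student": "Guide AI through security requirements via clear spec",
        "ai_as_coworker": "Iterate on session management approach together",
    },
}
print(missing_roles(objective))  # [] -> all three roles are present
```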
Objective Balance for AI-Native Content:
60-70%: Traditional technical skills
20-30%: Co-learning skills (working WITH AI)
10-20%: Validation/verification skills
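The balance targets above can be checked with a simple tally over a tagged objective list. The category labels and function name below are assumptions for illustration:

```python
# Target share ranges from the balance guidance above.
TARGET_RANGES = {
    "technical": (0.60, 0.70),     # traditional technical skills
    "co_learning": (0.20, 0.30),   # working WITH AI
    "validation": (0.10, 0.20),    # validation/verification skills
}

def balance_report(categories: list[str]) -> dict[str, bool]:
    """Check each category's share of the objective list against its target range."""
    total = len(categories)
    report = {}
    for name, (low, high) in TARGET_RANGES.items():
        share = categories.count(name) / total
        report[name] = low <= share <= high
    return report

# 20 objectives: 13 technical (65%), 5 co-learning (25%), 2 validation (10%).
objectives = ["technical"] * 13 + ["co_learning"] * 5 + ["validation"] * 2
print(balance_report(objectives))  # all three categories within range
```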
Step 6: Validate for Measurability
Once you've generated objectives, invoke the validation script to check they're measurable:
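The validation script itself is not shown in this excerpt. A minimal sketch of the kind of check it might perform (the vague-verb list is illustrative, not the script's actual rule set) is:

```python
import re

# Verbs that are hard to observe or test; measurable objectives avoid them.
VAGUE_VERBS = {"understand", "know", "appreciate", "learn", "grasp"}

def is_measurable(statement: str) -> bool:
    """Flag objectives that open with a vague, untestable verb."""
    words = re.findall(r"[a-z]+", statement.lower())
    return bool(words) and words[0] not in VAGUE_VERBS

print(is_measurable("Identify Python syntax for defining a function"))  # True
print(is_measurable("Understand functions"))                            # False
```

A real validator would go further, e.g. checking that the leading verb matches the declared Bloom's level, but a vague-verb screen alone already catches the most common failure mode.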