This document describes the format for guidelines documents for Plexus scorecard scores, and the validation tool for checking them.
This skill helps create and update classifier guidelines documents for Plexus scorecard score configurations. The user will provide information from subject matter experts (SMEs), commonly drawn from emails, chat messages, or other documents. This source information may be formatted for different audiences (e.g., agent instructions, training materials, operational procedures) rather than for classifier design.
Your job is to transform this information into the Plexus guidelines standard format, which is specifically designed to help distinguish between classification classes. The guidelines must be organized around how to tell the difference between classes (e.g., what makes something "Yes" vs "No"), NOT around operational procedures or agent instructions.
After you make any change to the guidelines, you must use the validation tool in this skill to check the guidelines file.
You can use the Plexus MCP tools to pull score versions (either the champion version or a specific version), and to push new score versions with updated guidelines after you make changes and validate them with the tool in this skill. You may NOT push updates without first validating them, and you may not push guidelines documents that are invalid. Making changes to the score configuration is out of scope for this skill; it is entirely about the guidelines.
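The pull, edit, validate, push workflow above amounts to gating the push on the validator's result. The sketch below is illustrative only: it does not show the actual MCP push tool (the push step is left as a comment), and it assumes the conventional exit-code contract where zero means the validation passed.

```python
import subprocess
import sys

def validation_passes(validator_cmd):
    """Return True when the validator command exits with status 0.

    Assumes the conventional contract: 0 = valid, non-zero = invalid.
    """
    return subprocess.run(validator_cmd).returncode == 0

# Hypothetical gate: never push unvalidated or invalid guidelines.
if validation_passes([sys.executable, "validate_guidelines.py", "guidelines.md"]):
    pass  # safe to push the new score version via the MCP tool (not shown)
else:
    print("Guidelines invalid; fix before pushing.")
```

The point of the gate is that the push branch is unreachable unless the validator has actually run and succeeded.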
Plexus (AnthusAI/Plexus on GitHub) uses human-readable Guidelines documents (Markdown format) alongside YAML-based classifier configurations. The Guidelines express how to make classification decisions and serve as the source of truth for alignment between human subject-matter experts (SMEs), AI/ML engineers, and the LLM-based classifiers.
There are three types of classifier guidelines, each with required elements:
Required sections (marked with *):
Optional sections:
Required sections (marked with *):
Optional sections:
Required sections (marked with *):
Optional sections:
When creating or updating guidelines:
Use the validate_guidelines.py tool to check guidelines documents for compliance. The tool:
Always run this tool after creating or updating guidelines documents.
Usage:

```bash
python validate_guidelines.py guidelines.md
```
Exit codes:
Example output:

```
Validation Results for: content_moderation.md
Classifier Type: Binary

Detected binary classifier
✓ All required sections present

Found Sections:
  • Objective ✓
  • Classes ✓
  • Definition of No ✓
  • Conditions for No ✓
  • Definition of Yes ✓
  • Examples (optional)

✓ Validation PASSED
```
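The core section check that output reflects can be sketched in a few lines. This is a sketch of the idea, not the actual `validate_guidelines.py` implementation; it assumes the binary-classifier section names shown in the example output appear as Markdown headings in the guidelines file.

```python
# Required sections for a binary classifier, per the example output above.
REQUIRED_BINARY_SECTIONS = [
    "Objective",
    "Classes",
    "Definition of No",
    "Conditions for No",
    "Definition of Yes",
]

def missing_sections(markdown_text):
    """Return the required section names not found among Markdown headings."""
    headings = {
        line.lstrip("#").strip()
        for line in markdown_text.splitlines()
        if line.startswith("#")
    }
    return [name for name in REQUIRED_BINARY_SECTIONS if name not in headings]
```

A document passes this sketch's check when `missing_sections` returns an empty list; anything it returns is a required section the author still needs to add.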
Creating new guidelines:
Updating existing guidelines:
When using this skill:
Example interaction:
User: Create guidelines for a spam classifier
Claude: I'll help you create guidelines for a spam classifier. First, let me understand the structure.
Is this a binary classifier (Spam/Not Spam) or does it need an NA class for uncertain cases?
User: Binary - just Spam or Not Spam
Claude: Got it. For a binary classifier, I need:
- Objective (what is the purpose of this classifier?)
- Target class (which class are we trying to detect - I assume "Spam"?)
- Default class (what do we classify when conditions aren't met - I assume "Not Spam"?)
- Definition of Spam
- Conditions for identifying Spam
- Definition of Not Spam
Let's start with the Objective. What should this classifier do?
[User provides information]
[Claude creates draft guidelines]
Let me validate this:
[Runs validation tool]
[Shows results]
The guidelines are now complete and valid. Would you like to add optional Examples sections?
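Assembled, the spam-classifier guidelines from this interaction might be skeletonized as follows. This is a hypothetical fragment: the section contents are illustrative placeholders, and the exact heading names follow the class-specific style used in the dialogue above ("Definition of Spam" rather than "Definition of Yes").

```markdown
# Objective
Identify unsolicited bulk messages (spam) so they can be filtered.

# Classes
- Spam (target class)
- Not Spam (default class)

# Definition of Spam
An unsolicited message sent in bulk, typically promotional, deceptive, or malicious.

# Conditions for Spam
- Message is unsolicited and promotes a product or service
- Message uses deceptive sender information or misleading subject lines

# Definition of Not Spam
A message the recipient requested or would reasonably expect to receive.
```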
Example transformation scenario:
User: Here are the agent instructions for branding requirements: "Agents must state school name, program name, degree type, and modality."
Claude: I need to transform these agent instructions into classification criteria. Let me think about how to distinguish between Yes and No:
For a binary classifier about branding compliance:
- "Conditions for No" would include: "Rep fails to state school name" (converting "must state" into a failure condition)
- "Conditions for Yes" would include: "Rep states school name" (the positive case)
I also notice this rule might have conditional logic. Does the degree type requirement depend on anything (like whether it's in metadata)?
[User clarifies conditional requirements]
[Claude creates guidelines with explicit conditional logic in both Conditions for No and Conditions for Yes sections]
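The conditional-logic pattern described in this scenario might render in the guidelines as paired, mirrored conditions. This is a hypothetical fragment; the actual metadata condition depends on how the SME answers the clarifying question.

```markdown
# Conditions for No
- Rep fails to state the school name
- Rep fails to state the program name
- Metadata indicates a degree program AND the rep fails to state the degree type

# Conditions for Yes
- Rep states the school name, program name, and modality
- Metadata indicates a degree program AND the rep states the degree type
- Metadata indicates a non-degree program (degree type is not required)
```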