Implements input and output validation guardrails for LLM-powered applications to prevent prompt injection, data leakage, toxic content generation, and hallucinated outputs. Builds a security validation pipeline using NVIDIA NeMo Guardrails Colang definitions, custom Python validators for PII detection and content policy enforcement, and the Guardrails AI framework for structured output validation. The guardrails system intercepts both user inputs (blocking injection attempts, stripping PII, enforcing topic boundaries) and model outputs (detecting hallucinations, filtering toxic content, validating JSON schema compliance). Activates for requests involving LLM output validation, AI content filtering, guardrail implementation, or LLM safety enforcement.
Do not use as a replacement for proper authentication, authorization, and network security controls. Guardrails are a defense-in-depth layer, not a perimeter defense. Not suitable for real-time content moderation of user-to-user communication without LLM involvement.
Prerequisites:
- An OpenAI API key (OPENAI_API_KEY environment variable)
- nemoguardrails package for Colang-based guardrail definitions
- guardrails-ai package for structured output validation (optional, for JSON schema enforcement)

Install the required Python packages:
# Core NeMo Guardrails library
pip install nemoguardrails
# Guardrails AI for structured output validation (optional)
pip install guardrails-ai
# Additional dependencies for PII detection and content analysis
pip install presidio-analyzer presidio-anonymizer spacy
python -m spacy download en_core_web_lg
The agent implements a complete input/output validation pipeline:
# Analyze a single input through all guardrail layers
python agent.py --input "Tell me how to hack into a system"
# Analyze input with a custom content policy file
python agent.py --input "Some text" --policy policy.json
# Scan a file of prompts through the guardrail pipeline
python agent.py --file prompts.txt --mode full
# Input-only validation (no LLM call, just check if input is safe)
python agent.py --input "Some text" --mode input-only
# Output validation mode (validate a pre-generated LLM response)
python agent.py --input "User question" --response "LLM response to validate" --mode output-only
# PII detection and redaction mode
python agent.py --input "My SSN is 123-45-6789 and email [email protected]" --mode pii
# JSON output for pipeline integration
python agent.py --file prompts.txt --output json
Create a JSON policy file defining allowed topics, blocked patterns, and PII categories:
{
"allowed_topics": ["customer_support", "product_info", "billing"],
"blocked_topics": ["politics", "violence", "illegal_activities", "competitor_products"],
"blocked_patterns": ["how to hack", "create malware", "bypass security"],
"pii_categories": ["PERSON", "EMAIL_ADDRESS", "PHONE_NUMBER", "US_SSN", "CREDIT_CARD"],
"max_output_length": 2000,
"require_grounded_response": true
}
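The agent loads this policy at startup and applies it on both sides of the LLM call. As a minimal sketch of how the `blocked_patterns` and `max_output_length` fields might be enforced on an output (function names are illustrative; topic checks in the real agent are LLM-assisted, not keyword-based):

```python
import json

def load_policy(path: str) -> dict:
    """Read the JSON policy file described above."""
    with open(path) as f:
        return json.load(f)

def enforce_output(policy: dict, output: str) -> tuple[bool, str]:
    """Return (allowed, possibly-truncated output) under the policy."""
    lowered = output.lower()
    for pattern in policy["blocked_patterns"]:
        if pattern in lowered:
            return False, ""           # hard block on a forbidden pattern
    limit = policy.get("max_output_length", 2000)
    return True, output[:limit]        # soft enforcement: truncate to limit

policy = {
    "blocked_patterns": ["how to hack", "create malware", "bypass security"],
    "max_output_length": 2000,
}
print(enforce_output(policy, "Your refund was processed."))
```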
Create a NeMo Guardrails configuration directory with config.yml and Colang flow files:
# config.yml