AI-powered security threat prediction and prevention system
AI Threat Analyzer predicts and prevents security vulnerabilities by analyzing code, dependencies, infrastructure-as-code, and configuration files using a hybrid approach combining static analysis, machine learning models, and threat intelligence.
This skill provides the following commands:
openclaw skill ai-threat-analyzer scan-code [options] <path>

Performs AI-enhanced static analysis on source code.

Flags:
- --language=<lang> - Target language: python, javascript, go, java, all (default: auto-detect)
- --severity=<level> - Minimum severity: low, medium, high, critical (default: medium)
- --model=<path|huggingface> - Custom model path or HuggingFace ID (default: env OPENCLAW_AI_MODEL)
- --confidence=<0.0-1.0> - AI confidence threshold (default: 0.7)
- --context-lines=<n> - Include N lines of context in reports (default: 5)
- --exclude=<pattern> - Exclude paths matching glob pattern (repeatable)
- --output=<format> - Output format: json, sarif, html, terminal (default: terminal)
- --include-suppressed - Include suppressed/false-positive predictions

Example:
openclaw skill ai-threat-analyzer scan-code ./services/api --language python --severity medium --output sarif --context-lines 3
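The SARIF output from the example above can be triaged directly with jq. A minimal sketch, assuming the standard SARIF 2.1.0 layout (`runs[].results[]` with a `ruleId` per finding); the inline fixture stands in for a real report:

```shell
# Fixture standing in for a real SARIF report produced by scan-code
printf '{"runs":[{"results":[{"ruleId":"SQLI"},{"ruleId":"SQLI"},{"ruleId":"XSS"}]}]}' > report.sarif

# Tally findings per rule: collect every ruleId, group, and count each group
jq '[.runs[].results[].ruleId] | group_by(.) | map({(.[0]): length}) | add' report.sarif
# Produces a per-rule tally object, e.g. {"SQLI": 2, "XSS": 1}
```

The same filter works on real reports; swap `report.sarif` for your scan output.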
openclaw skill ai-threat-analyzer scan-deps [options] <manifest>

Analyzes dependencies for known CVEs, supply-chain risks, and suspicious patterns.

Flags:
- --type=<type> - Manifest type: npm, pip, cargo, gomod, all (default: auto)
- --depth=<n> - Dependency tree depth to analyze (default: 3, max: 10)
- --include-dev - Include devDependencies (default: false)
- --check-license - Flag restrictive licenses (default: true)
- --taint-tracking - Trace data flow from vulnerable deps into your code (default: true)
- --output=<format> - json, table, github-annotations
- --fix-pr - Create a PR with automated updates for vulnerable deps

Example:
openclaw skill ai-threat-analyzer scan-deps ./package.json --taint-tracking --output github-annotations
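A sketch of how the github-annotations output could gate a build, assuming it emits standard GitHub workflow-command lines (`::error file=...,line=...::message`) — that exact shape is an assumption, and the fixture below stands in for real scan output:

```shell
# Fixture standing in for real github-annotations output from scan-deps
printf '::error file=package.json,line=12::lodash 4.17.15 has CVE-2020-8203\n::warning file=package.json,line=20::dev-only advisory\n' > deps-annotations.txt

# Fail-the-build gate: count error-level annotation lines
errors=$(grep -c '^::error' deps-annotations.txt)
echo "error annotations: $errors"
# With the fixture above this prints: error annotations: 1
```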
openclaw skill ai-threat-analyzer scan-infra [options] <path>

Scans infrastructure-as-code for security misconfigurations.

Flags:
- --iac-types=<list> - Comma-separated: terraform,kubernetes,cloudformation,docker,helm (default: all)
- --policy=<path> - Custom OPA (Open Policy Agent) policy bundle
- --cloud-provider=<provider> - Context for cloud-specific checks: aws, gcp, azure, all
- --simulate-attacks - Generate attack scenarios based on misconfigurations (default: false)
- --output=<format> - json, sarif, cli-table

Example:
openclaw skill ai-threat-analyzer scan-infra ./infra --iac-types terraform,kubernetes --cloud-provider aws --simulate-attacks
openclaw skill ai-threat-analyzer predict-threat [options] <code-or-config>

The AI model predicts how a code change or configuration might be exploited within the prediction window (default: the next 90 days).

Flags:
- --timeframe=<days> - Prediction window: 30, 60, 90 days (default: 90)
- --asset-type=<type> - Asset classification: public-facing, internal, pci, hipaa, generic
- --patch-rush - Accelerated prediction for emergency patches (faster, reduced accuracy)

Example:
openclaw skill ai-threat-analyzer predict-threat ./src/auth.py --asset-type public-facing
openclaw skill ai-threat-analyzer apply-fixes [options] <scan-result>

Automatically applies AI-recommended fixes where confidence is above 90%.

Flags:
- --dry-run - Show fixes without applying them (default: false)
- --max-fixes=<n> - Maximum fixes to apply in one run (default: 10)
- --require-approval - Interactive approval per fix (default: true)
- --backup-dir=<path> - Create backups before modifications
- --git-commit - Create a git commit for each applied fix
- --pr - Create a pull request with all fixes instead of applying them directly

Example:
openclaw skill ai-threat-analyzer apply-fixes ./scan-results.json --pr --require-approval --backup-dir ./backups
openclaw skill ai-threat-analyzer train-model [options] <dataset>

Fine-tunes the AI model on your organization's historical vulnerability data.

Flags:
- --base-model=<hf-id> - Base HuggingFace model (default: microsoft/codebert-base)
- --epochs=<n> - Training epochs (default: 5)
- --batch-size=<n> - Training batch size (default: 16)
- --validation-split=<float> - Validation split ratio (default: 0.2)
- --output-dir=<path> - Where to save the fine-tuned model (default: ./models/threat-analyzer)
- --push-to-hub - Push to the HuggingFace Hub (requires AUTH_TOKEN)

Example:
openclaw skill ai-threat-analyzer train-model ./historical-vulns.jsonl --epochs 10 --output-dir ./models/custom-threat-analyzer --batch-size 32
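Before a long fine-tuning run it is worth validating the dataset: every line of a .jsonl file must parse as standalone JSON, and one malformed line can abort training partway through. A minimal check (the fixture stands in for your real historical-vulns.jsonl):

```shell
# Fixture: two well-formed JSONL records
printf '{"code":"eval(x)","label":"injection"}\n{"code":"open(p)","label":"benign"}\n' > dataset.jsonl

# jq parses each line as an independent JSON value and exits non-zero
# on the first parse error, so this doubles as a lint pass
if jq -e . dataset.jsonl > /dev/null 2>&1; then echo "dataset OK"; fi
```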
openclaw skill ai-threat-analyzer explain-finding <finding-id>

Provides a detailed, context-aware explanation of a specific vulnerability with remediation steps.

Flags:
- --format=<format> - text, markdown, json (default: markdown)
- --audience=<level> - Tailor the explanation: developer, security-engineer, manager (default: developer)
- --include-code-samples - Provide secure vs. vulnerable code examples (default: true)

Example:
openclaw skill ai-threat-analyzer explain-finding SQLI-2024-12345 --audience developer --format markdown
Preparation
# Set confidence threshold for your risk appetite
export THREAT_SCAN_CONFIDENCE=0.75
# Point to custom model if available
export OPENCLAW_AI_MODEL="./models/fine-tuned-threat-analyzer"
Run multi-layer scan
# Parallel scan of code, dependencies, and infrastructure
openclaw skill ai-threat-analyzer scan-code ./src --output json > code-scan.json &
openclaw skill ai-threat-analyzer scan-deps ./package.json --output json > deps-scan.json &
openclaw skill ai-threat-analyzer scan-infra ./infra --output json > infra-scan.json &
wait
# Combine results
cat code-scan.json deps-scan.json infra-scan.json | jq -s 'add' > combined-threats.json
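The merge step above relies on each scan writing a top-level JSON array. A minimal illustration of why `jq -s 'add'` works: `-s` slurps every input into one array-of-arrays, and `add` concatenates them into a single flat array of findings.

```shell
# Two stand-in scan outputs, each a top-level array of findings
printf '[{"id":1}]' > a.json
printf '[{"id":2},{"id":3}]' > b.json

# Slurp both arrays and concatenate; length confirms all findings survive
jq -s 'add | length' a.json b.json
# prints 3
```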
Review predictions
# Get AI explanations for the top 10 critical findings
# (explain-finding takes a single <finding-id>, so loop over the IDs)
jq -r '.[] | select(.severity=="critical") | .id' combined-threats.json | head -10 |
  while read -r id; do
    openclaw skill ai-threat-analyzer explain-finding "$id" --audience security-engineer --format json
  done | jq -s '.' > critical-explanations.json
# Generate prioritized action plan
openclaw skill ai-threat-analyzer predict-threat ./src --asset-type public-facing > threat-predictions.json
Apply fixes with validation
# Create PR with fixes (review required)
openclaw skill ai-threat-analyzer apply-fixes combined-threats.json --pr --require-approval --max-fixes 20 --backup-dir ./threat-fix-backups
# After PR approval and merge, verify fixes
# (re-run the scan to confirm issues resolved)
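The verification step above can be turned into a hard gate. A sketch, assuming the JSON scan output is an array of findings with a "severity" field (the field name is an assumption, and the stub `rescan.json` stands in for a real re-scan):

```shell
# In practice, produce rescan.json with a real re-scan:
#   openclaw skill ai-threat-analyzer scan-code ./src --output json > rescan.json
# Stub output for illustration: no critical findings remain after the fix PR
printf '[{"id":"A","severity":"high"},{"id":"B","severity":"medium"}]' > rescan.json

# Fail loudly if any critical finding survived the fixes
crit=$(jq '[.[] | select(.severity=="critical")] | length' rescan.json)
if [ "$crit" -eq 0 ]; then
  echo "no critical findings remain"
else
  echo "still $crit critical findings" >&2
  exit 1
fi
```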
Continuous improvement
# Fine-tune on false positives/negatives to improve accuracy
# Export reviewed findings to a training dataset
# (JSONL in, JSONL out: filter line by line with -c, not as an array)
jq -c 'select(.reviewed==true)' threat-reviews.jsonl > training-data.jsonl
# Retrain model monthly
openclaw skill ai-threat-analyzer train-model ./training-data.jsonl --epochs 3 --output-dir ./models/monthly-$(date +%Y%m)
# .github/workflows/threat-scan.yml
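The comment above names a CI workflow file. One way that workflow might look — a sketch only, where the trigger, severity gate, and the assumption that openclaw is already installed on the runner are all illustrative:

```yaml
# .github/workflows/threat-scan.yml (illustrative sketch; adapt to your setup)
name: threat-scan
on: [pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Assumes openclaw is preinstalled on the runner or installed in an earlier step
      - name: Scan code
        run: openclaw skill ai-threat-analyzer scan-code ./src --severity high --output sarif > results.sarif
      - name: Upload SARIF to code scanning
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: results.sarif
```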