Perform stack-aware static code analysis and security-focused source scanning, then convert the output into triaged findings, CI recommendations, and remediation work. Prefer `sec-security-vulnerability-analysis` for dependency, container, and IaC scanning. Prefer `sec-risk-security-review` for design-time threat modeling and compliance review.
Use this skill when
A task requires SAST, static code analysis, or security-focused source scanning of changed code or the wider repository.
The project needs concrete scanner choices, local execution guidance, or a report that turns findings into tracked work.
A pull request or release candidate needs source-level security review.
Do not use this skill when
The task only needs dependency, secret, container, or IaC vulnerability analysis without source-level findings; use sec-security-vulnerability-analysis for that.
Follow docs/guidelines/shared-operating-policy.md#story-maintenance for backlog, evidence, and follow-up updates tied to this skill.
Quality bars
Static analysis output must distinguish confirmed findings from hotspots or low-confidence findings. Use report.template.md categories: confirmed issues, hotspots, false positives, accepted suppressions.
Suppressions require a formal record with rationale, owner, scope, and review date. Use suppression.template.md; critical findings require a HUS-* approval story in execution-plan/backlog/ before the suppression is committed.
The report must point to remediation paths and linked stories; raw scanner output alone is insufficient.
False positive rate must be <30%; if higher, tune rules or switch tools.
Test coverage must exist for confirmed security issues (coordinate with qa-verification-planner).
CI/CD gates must enforce scanning on every PR and release; SARIF findings must be integrated into code scanning dashboard (coordinate with pipeline-connector).
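The <30% false-positive bar above can be checked mechanically once triage labels exist. A minimal sketch, assuming counts come from the triage report (function names are illustrative, not part of any scanner API):

```python
def false_positive_rate(confirmed: int, false_positives: int) -> float:
    """Share of triaged findings that turned out to be false positives."""
    triaged = confirmed + false_positives
    if triaged == 0:
        return 0.0
    return false_positives / triaged


def needs_rule_tuning(confirmed: int, false_positives: int,
                      threshold: float = 0.30) -> bool:
    """True when the FP rate breaches the quality bar and rules should be tuned."""
    return false_positive_rate(confirmed, false_positives) >= threshold
```

Running this over each scan's triage counts gives an objective trigger for the "tune rules or switch tools" decision instead of a gut call.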
Failure modes to avoid
Treating any raw scanner output as a confirmed vulnerability without triage or exploitability assessment.
Running broad rule sets with no triage plan; high false positive rates (>50%) cause suppression fatigue and missed signals.
Failing to separate code-quality lint (style, complexity) from security-focused SAST (injection, crypto, auth flaws).
Leaving findings only in tool output without story tracking, evidence, or release notes.
Ignoring SAST blind spots (architecture, auth logic, business logic, crypto misuse). Route to complementary skills (sec-risk-security-review, qa-verification-planner) as needed.
Suppressing findings without owner, rationale, or review date; create a HUS-* approval story in execution-plan/backlog/ for all critical suppressions before committing them.
Assuming SAST alone proves security; always combine with threat modeling, code review, testing, and risk assessment.
Completion checklist
Use docs/guidelines/shared-operating-policy.md#completion-checklist as the default completion gate for this skill.
Detailed workflow
Inventory — Detect languages, frameworks, changed paths, and the build model.
Triage & route — Convert triaged findings into tracked work:
High/critical confirmed issues → US-* (user-facing) or HUS-* (human-executable)
Hotspots needing architecture review → Route to sec-risk-security-review; create FEAT-* for design changes
Business-logic security gaps (discovered during triage) → Route to qa-verification-planner for scenario tests
Configuration/deployment issues → Route to pipeline-connector
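The routing rules above can be expressed as a small lookup table; the category keys and targets below mirror the list and are otherwise illustrative:

```python
# Finding category -> (owning skill or story type, expected output)
ROUTING = {
    "critical_confirmed": ("US-*/HUS-*", "remediation story"),
    "hotspot_architecture": ("sec-risk-security-review", "FEAT-* design change"),
    "business_logic_gap": ("qa-verification-planner", "scenario tests"),
    "config_deployment": ("pipeline-connector", "pipeline fix"),
}


def route_finding(category: str) -> tuple[str, str]:
    """Map a triaged finding category to its owning skill/story type."""
    try:
        return ROUTING[category]
    except KeyError:
        raise ValueError(f"untriaged category: {category!r}")
```

Raising on unknown categories keeps untriaged findings visible instead of silently dropping them.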
Verify & evidence:
Re-run scans after fixes; confirm affected rules now pass
Link test cases from qa-verification-planner demonstrating fix prevents exploitation
Update story evidence with scan results, suppression records, and re-test pass
Update release gate: no unreviewed critical findings; all high findings have remediation or documented acceptance
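The release gate in the last bullet can be reduced to a predicate over triaged findings. A sketch under an assumed finding shape (severity plus `reviewed`/`remediated`/`accepted` flags; not a scanner format):

```python
def release_gate_ok(findings: list[dict]) -> bool:
    """Gate: no unreviewed critical findings; every high finding is
    remediated or has a documented acceptance."""
    for finding in findings:
        severity = finding["severity"]
        if severity == "critical" and not finding.get("reviewed", False):
            return False
        if severity == "high" and not (finding.get("remediated")
                                       or finding.get("accepted")):
            return False
    return True
```

Wired into CI, this turns the release gate from a checklist item into a hard pass/fail signal.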
Best-practice notes
OWASP-Aligned Tool Selection
When evaluating SAST tools, OWASP recommends criteria beyond "does it find anything":
Accuracy: False positive (FP) and false negative (FN) rates. Tools with >50% FP are noisy; prefer tools with <30% FP and proven OWASP Benchmark scores (70-80% is typical for mature tools).
Language completeness: Tool must support all primary languages in your project.
Framework understanding: Tools that understand Django, Spring, Rails, React catch framework-specific vulnerabilities better than generic linters.
Buildability: Some tools require compiled code; others (AST/pattern-based) work on incomplete code. Incomplete code support is valuable for fast feedback loops.
SARIF interoperability: Standard output format for integrating findings with code scanning platforms, dashboards, and external tools.
See tool-selection-guide.md for detailed criteria, tool comparisons, and decision tree.
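Because SARIF is plain JSON, dashboard integration can start from a few lines of parsing against the standard `runs[].results[]` layout. The sample document below is fabricated for illustration:

```python
import json

# Minimal fabricated SARIF 2.1.0 document for illustration only.
SAMPLE_SARIF = json.dumps({
    "version": "2.1.0",
    "runs": [{
        "tool": {"driver": {"name": "semgrep"}},
        "results": [{
            "ruleId": "python.lang.security.audit.dangerous-system-call",
            "level": "error",
            "message": {"text": "os.system called with non-literal input"},
        }],
    }],
})


def summarize_sarif(document: str) -> list[dict]:
    """Flatten SARIF results into (tool, rule, level, message) rows."""
    rows = []
    for run in json.loads(document).get("runs", []):
        tool = run["tool"]["driver"]["name"]
        for result in run.get("results", []):
            rows.append({
                "tool": tool,
                "rule": result.get("ruleId"),
                "level": result.get("level", "warning"),
                "message": result["message"]["text"],
            })
    return rows
```

The same flattening works for any SARIF-emitting tool, which is exactly why the interoperability criterion matters.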
GitHub CodeQL (When Available)
If GitHub integration was selected during configuration:
Recommendation: Start with CodeQL default setup (one-click activation in GitHub). It's the highest-accuracy multi-language option (75-80% OWASP Benchmark) and needs zero workflow coding.
Best for: Multi-language repos, complex data-flow vulnerabilities, long-term baseline building
Worked example — handling a likely false positive (SQL injection flagged on a parameterized query):
Action: Record in suppression.template.md; tune the rule in .semgrep.yml (disable it or exclude parameterized calls)
Evidence: Linked code review + test case test_sql_injection_params.py demonstrating parameterized calls block injection
Suppression record:
FP-001:
finding: SQL injection in cursor.execute(query)
reason: Tool doesn't understand ORM parameterized binding; query is template, params are separate
evidence: tests/test_db.py::test_parametrized_prevent_injection
owner: @alice-security
review_date: 2026-09-25
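Records like FP-001 above can be linted before commit. A sketch whose field names follow the example record; suppression.template.md remains the source of truth:

```python
from datetime import date

REQUIRED_FIELDS = ("finding", "reason", "evidence", "owner", "review_date")


def check_suppression(record: dict, today: date) -> list[str]:
    """Return problems with a suppression record; an empty list means it passes."""
    problems = [f"missing field: {field}"
                for field in REQUIRED_FIELDS if not record.get(field)]
    review = record.get("review_date")
    if isinstance(review, date) and review < today:
        problems.append(f"review date {review} has lapsed; renew or remove")
    return problems
```

Running this as a pre-commit hook enforces the "formal record with rationale, owner, scope, and review date" bar automatically.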
Suppression Policy
Critical rule: All suppressions must have a formal record with rationale, owner, scope, and review date. Critical findings require a HUS-* approval story in execution-plan/backlog/ (approved before the suppression is committed); record the decision using gov-review-gate-management.
Format: Use suppression.template.md for each suppression. Record in code with tool-specific syntax:
# Bandit
os.system(cmd)  # nosec B605 (hardcoded trusted command; reviewed 2026-04-15)
# Semgrep
template = django.template.Template(config) # nosemgrep (config from admin only; test coverage in test_config_injection.py)
# ESLint
eval(expr) // eslint-disable-line security/detect-eval -- config parsing; input from trusted file
Strategy:
Suppress as narrowly as possible (single line, not whole function/file)
Only approve suppressions with evidence: test case, code review, or threat model
For critical/high findings: require security reviewer sign-off before suppression
For medium/low with business rationale: product owner also approves
Quarterly review: remove stale suppressions; renew justified ones
See suppression.template.md for detailed format and lifecycle.
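The quarterly review step above can be sketched as a partition over the suppression records (record shape assumed from the FP-001 example):

```python
from datetime import date


def quarterly_review(suppressions: list[dict],
                     today: date) -> tuple[list[dict], list[dict]]:
    """Split suppressions into (still_valid, due_for_review) by review_date."""
    still_valid = [s for s in suppressions if s["review_date"] >= today]
    due_for_review = [s for s in suppressions if s["review_date"] < today]
    return still_valid, due_for_review
```

Everything in the second list either gets a renewed record with a fresh review date or is removed so the finding resurfaces.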