Scan LLM endpoints for prompt injection vulnerabilities using TMAS AI Scanner.
Scan LLM endpoints for prompt injection and jailbreak vulnerabilities.
DEFAULT MODE: Automatically detect LLM endpoints and models. NO PROMPTS unless ambiguous.
ADVANCED MODE: If $ARGUMENTS contains --advanced or -a, use AskUserQuestion for all options.
Run these checks and stop immediately if any fail:
```bash
# Check TMAS CLI is installed (on PATH or in ~/.local/bin)
if ! command -v tmas &>/dev/null && ! ~/.local/bin/tmas version &>/dev/null; then
  echo "ERROR: TMAS CLI not installed"
  echo "Run /trendai-setup to install"
  exit 1
fi
# Make a ~/.local/bin install reachable by the plain `tmas` calls below
PATH="$HOME/.local/bin:$PATH"
# Check API key is set
if [ -z "$TMAS_API_KEY" ]; then
  echo "ERROR: TMAS_API_KEY not set"
  echo "Run /trendai-setup to configure"
  exit 1
fi
# Show status: report only whether each key is set and its length,
# never the key value itself
tmas version
echo "TMAS_API_KEY: SET (${#TMAS_API_KEY} chars)"
if [ -n "$TARGET_API_KEY" ]; then
  echo "TARGET_API_KEY: SET (${#TARGET_API_KEY} chars)"
else
  echo "TARGET_API_KEY: NOT SET (only needed for authenticated endpoints)"
fi
```
STOP HERE if prerequisites fail. Tell user to run /trendai-setup first.
Check what the user provided:
```bash
ARGS="$ARGUMENTS"
ADVANCED_MODE=false
CONFIG_FILE=""
ENDPOINT_URL=""
MODEL_NAME=""
# Check for advanced mode: match --advanced / -a only as whole words, so a URL
# or model name that merely contains "-a" is neither taken as the flag nor
# mangled when the flag is removed
REMAINING=""
for word in $ARGS; do
  case "$word" in
    --advanced|-a) ADVANCED_MODE=true ;;
    *) REMAINING="${REMAINING:+$REMAINING }$word" ;;
  esac
done
ARGS="$REMAINING"
# Classify what is left: config file, endpoint URL, or (for Ollama) a model name
case "$ARGS" in
  *.yaml|*.yml) CONFIG_FILE="$ARGS" ;;
  http://*|https://*) ENDPOINT_URL="$ARGS" ;;
  "") ;;   # nothing given: ask for the endpoint type below
  *) MODEL_NAME="$ARGS" ;;
esac
```
If $ARGUMENTS is a .yaml or .yml file, read it and skip to Step 8.
If no endpoint provided, ask what type:
Use AskUserQuestion:
Once you know the endpoint type, AUTOMATICALLY discover available models. Do NOT ask the user to specify a model manually.
```bash
# For Ollama - automatically list models
ollama list 2>/dev/null
# For LM Studio - query the API (requires jq)
curl -s http://localhost:1234/v1/models 2>/dev/null | jq -r '.data[].id'
# For Ollama via API (alternative, requires jq)
curl -s http://localhost:11434/api/tags 2>/dev/null | jq -r '.models[].name'
```
After discovering models:
| Models Found | Action |
|---|---|
| 0 models | Tell user: "No models found. Run ollama pull llama3.2 first." |
| 1 model | Use it automatically, tell user which one |
| 2+ models | ASK which model using AskUserQuestion |
If multiple models found, use AskUserQuestion:
For OpenAI/remote APIs: Ask user to provide the model name (gpt-4, gpt-3.5-turbo, etc.)
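The discovery step and the decision table above, as an added bash example that is not part of the TMAS command file: it parses constructed samples of `ollama list`, Ollama `/api/tags` and LM Studio `/v1/models` output without jq and returns the PULL / USE / ASK action for 0, 1 or 2+ models. The model names and the `<id>` placeholders in the samples are constructed, not real scan output.

```bash
#!/usr/bin/env bash
# Added example, not part of the TMAS command file. Parses the three model
# listings the text queries and applies the 0 / 1 / 2+ decision table.
set -eu

# Constructed samples (not real output); <id> stands in for a model digest.
OLLAMA_LIST_SAMPLE='NAME               ID      SIZE      MODIFIED
llama3.2:latest    <id>    2.0 GB    3 days ago
mistral:7b         <id>    4.1 GB    2 weeks ago'
OLLAMA_TAGS_SAMPLE='{"models":[{"name":"llama3.2:latest","model":"llama3.2:latest","details":{"family":"llama"}},{"name":"mistral:7b","model":"mistral:7b"}]}'
LMSTUDIO_MODELS_SAMPLE='{"data":[{"id":"qwen2.5-7b-instruct","object":"model","owned_by":"organization_owner"}],"object":"list"}'
EMPTY_SAMPLE='{"models":[]}'

# `ollama list` prints a table: drop the header row, keep the NAME column.
models_from_ollama_list() {
  awk 'NR > 1 && NF > 0 { print $1 }'
}

# Ollama /api/tags uses "name", LM Studio /v1/models uses "id". Only those
# two string fields are read, so keys such as "model" or "family" are ignored.
models_from_json() {
  { grep -oE '"(name|id)"[[:space:]]*:[[:space:]]*"[^"]*"' || true; } |
    sed -E 's/^"[^"]*"[[:space:]]*:[[:space:]]*"([^"]*)"$/\1/'
}

# Decision table from the text. Input: newline-separated model names.
# Output: one line "ACTION<TAB>detail", ACTION being PULL, USE or ASK.
choose_model() {
  models=$(printf '%s\n' "$1" | sed '/^[[:space:]]*$/d')
  count=$(printf '%s\n' "$models" | grep -c . || true)
  case "$count" in
    0) printf 'PULL\tNo models found. Run ollama pull llama3.2 first.\n' ;;
    1) printf 'USE\t%s\n' "$models" ;;
    *) printf 'ASK\t%s\n' "$(printf '%s\n' "$models" | paste -sd, -)" ;;
  esac
}

main() {
  choose_model "$(printf '%s\n' "$OLLAMA_LIST_SAMPLE" | models_from_ollama_list)"
  choose_model "$(printf '%s' "$OLLAMA_TAGS_SAMPLE" | models_from_json)"
  choose_model "$(printf '%s' "$LMSTUDIO_MODELS_SAMPLE" | models_from_json)"
  choose_model "$(printf '%s' "$EMPTY_SAMPLE" | models_from_json)"
}
main
```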
Use these defaults unless --advanced mode:
# Default attack objectives (all 4 main ones)
attack_objectives:
- System Prompt Leakage
- Sensitive Data Disclosure
- Agent Tool Definition Leakage
- Malicious Code Generation
# Default: baseline only (fastest)