Create and administer Cortex Agents. Use for: creating agents, adding tools/skills, managing access grants, and agent administration. Covers the full agent creation workflow including tool selection, REST API creation, verification, optimization, and post-creation admin (granting access, troubleshooting).
Whenever running scripts, make sure to use uv.
The following information will be requested during the workflow:

Step 1 - Administrative Setup:
- Database (e.g., MY_DATABASE)
- Schema (e.g., AGENTS)
- Agent Name (e.g., MY_SALES_AGENT)
- Role (e.g., MY_AGENT_CREATOR_ROLE)

Step 2 - Requirements Gathering:
- Agent purpose and available data sources

Step 3 - Tool Selection (if using existing tools):
- Tools Database (e.g., DATA_DB)
- Tools Schema (e.g., ANALYTICS)

Step 4 - Agent Creation:
- Connection name (e.g., snowhouse)

This workflow creates a basic agent with placeholder content, with optional semantic view creation/optimization and agent-level optimization:
Step 1 - Administrative Setup

Goal: Gather basic administrative configuration and create a working directory for the agent
Actions:
Ask the user for administrative configuration only:
Let's set up the basic administrative details for your agent.
Where would you like to create your agent?
- Database: [e.g., MY_DATABASE]
- Schema: [e.g., AGENTS]
- Agent Name: [e.g., MY_SALES_AGENT]
What role should I use for agent creation?
- Role: [e.g., MY_AGENT_CREATOR_ROLE]
Note: This role must have CREATE AGENT privilege on <DATABASE>.<SCHEMA>
Construct the Fully Qualified Agent Name:
<DATABASE>.<SCHEMA>.<AGENT_NAME> (e.g., MY_DATABASE.AGENTS.MY_SALES_AGENT)

Check if the workspace directory exists:
- Check whether a {DATABASE}_{SCHEMA}_{AGENT_NAME}/ directory already exists
- If it does not, create it with init_agent_workspace.py

Create workspace (do NOT create directories manually):
⚠️ ALWAYS use init_agent_workspace.py to create the workspace. Do NOT manually create directories with mkdir. The script creates required files including metadata.yaml.
uv run python ../scripts/init_agent_workspace.py --agent-name <AGENT_NAME> --database <DATABASE> --schema <SCHEMA>
# Example:
uv run python ../scripts/init_agent_workspace.py --agent-name MY_SALES_AGENT --database MY_DATABASE --schema AGENTS
Expected Directory Structure After Running Script:
MY_DATABASE_AGENTS_MY_SALES_AGENT/
├── metadata.yaml
│ # database: MY_DATABASE
│ # schema: AGENTS
│ # name: MY_SALES_AGENT
├── optimization_log.md
└── versions/
└── vYYYYMMDD-HHMM/
└── evals/
Verify after running: Check that metadata.yaml exists in the workspace root before proceeding.
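That verification can also be scripted. The helper below is a minimal sketch (hypothetical, not one of the shipped scripts) that checks the layout produced by init_agent_workspace.py, assuming the directory structure shown above:

```python
from pathlib import Path

# Hypothetical helper: confirm init_agent_workspace.py produced the
# expected layout (metadata.yaml and versions/) before proceeding.
def verify_workspace(workspace: str) -> list[str]:
    root = Path(workspace)
    missing = []
    if not (root / "metadata.yaml").is_file():
        missing.append("metadata.yaml")
    if not (root / "versions").is_dir():
        missing.append("versions/")
    return missing

# An empty return value means the workspace root is complete; otherwise
# the listed entries are missing and the init script should be re-run.
```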
IMPORTANT: After completing Step 1, ask the user if they want to proceed to Step 2 before continuing.
Step 2 - Requirements Gathering

Goal: Understand what the agent should do and what data sources are available
Actions:
Ask the user about the agent's purpose and analytics requirements:
Now let's understand what your agent should do.
What questions or tasks should this agent handle?
Examples:
- Usage statistics and trends?
- Error/failure analysis?
- Feature adoption metrics?
- Performance metrics?
- User behavior patterns?
What data sources do you have available?
- Do you have semantic views, tables, or other data sources?
- What's the structure of the data (logs, metrics, events)?
- What domain does this data cover?
Discuss and clarify the agent's scope with the user.
Store the gathered requirements in the workspace for use in later steps.
IMPORTANT: After completing Step 2, ask the user if they want to proceed to Step 3 before continuing.
Step 3 - Tool Selection

Goal: Identify which Cortex Search Services, Semantic Views, and/or Stored Procedures to include in the agent, or create them as needed.
Actions:
Ask the user if they want to use existing tools or create new tools:
If the user chooses "Use existing tools":
a. Ask for tools location:
Where are your semantic views and search services located?
- Tools Database: [e.g., DATA_DB]
- Tools Schema: [e.g., ANALYTICS]
b. Query available Cortex Search Services:
SHOW CORTEX SEARCH SERVICES IN SCHEMA <TOOLS_DATABASE>.<TOOLS_SCHEMA>;
c. Query available Semantic Views:
SHOW SEMANTIC VIEWS IN SCHEMA <TOOLS_DATABASE>.<TOOLS_SCHEMA>;
d. Query available Stored Procedures (if the user wants custom tools):
SHOW PROCEDURES IN SCHEMA <TOOLS_DATABASE>.<TOOLS_SCHEMA>;
e. Present available tools to the user:
f. Ask the user to select which tool(s) to include
g. For selected tools, save their fully qualified names and configuration details to the workspace metadata
If the user chooses "Create new tools":
Read TOOL_CREATION.md in this directory for detailed instructions on creating Cortex Search Services, Semantic Views, and Stored Procedures.
If the user needs tools that don't exist after viewing existing tools:
Read TOOL_CREATION.md in this directory for detailed instructions on creating Cortex Search Services, Semantic Views, and Stored Procedures.
IMPORTANT: After completing Step 3 and getting user confirmation, ask the user if they want to proceed to Step 4 before continuing.
Step 4 - Agent Creation

Goal: Create the agent in Snowflake with placeholder tool descriptions
Note: This step uses the database, schema, and role provided in Step 1. If the user hasn't specified a connection name, ask for it now (default: snowhouse). If you need to verify access, you can check:
SHOW GRANTS ON SCHEMA <DATABASE>.<SCHEMA>;
-- Ensure the role has CREATE AGENT privilege
Actions:
Build agent specification JSON with placeholder content:
{
"models": {
"orchestration": "auto"
},
"orchestration": {
"budget": {
"seconds": 900,
"tokens": 400000
}
},
"instructions": {
"orchestration": "<optional_orchestration_instructions>",
"response": "<optional_response_instructions>"
},
"tools": [
{
"tool_spec": {
"type": "cortex_analyst_text_to_sql",
"name": "<tool_name>",
"description": "Query data from <VIEW_NAME>"
}
}
],
"tool_resources": {
"<tool_name>": {
"execution_environment": {
"query_timeout": 299,
"type": "warehouse",
"warehouse": ""
},
"semantic_view": "<FULLY_QUALIFIED_VIEW_NAME>"
}
}
}
CRITICAL FORMAT NOTE: The tool_resources must be a separate top-level object in the agent spec, not nested inside each tool. Each tool is referenced by its name in the tool_resources object.
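A quick local consistency check for this rule can be sketched as follows. This is an illustrative helper assuming the spec shape shown above, not one of the shipped scripts (prepare_agent_spec.py performs the authoritative validation), and some built-in tool types may legitimately have no resource entry, so treat mismatches as warnings:

```python
# Illustrative check: tool_resources must be a separate top-level object
# whose keys match the names declared under tools[].tool_spec.name.
def check_tool_resources(spec: dict) -> list[str]:
    problems = []
    tool_names = {t["tool_spec"]["name"] for t in spec.get("tools", [])}
    resources = spec.get("tool_resources", {})
    if not isinstance(resources, dict):
        return ["tool_resources must be a top-level object"]
    for name in sorted(tool_names - set(resources)):
        problems.append(f"tool '{name}' has no tool_resources entry")
    for name in sorted(set(resources) - tool_names):
        problems.append(f"tool_resources entry '{name}' matches no tool")
    return problems
```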
Save the specification to the workspace using workspace_write.py:
cat << 'EOF' | uv run python ../scripts/workspace_write.py --fqn <DATABASE>.<SCHEMA>.<AGENT_NAME> --file agent_spec.json --stdin
<PASTE THE SPEC JSON HERE>
EOF
# Example:
cat << 'EOF' | uv run python ../scripts/workspace_write.py --fqn MY_DATABASE.AGENTS.MY_SALES_AGENT --file agent_spec.json --stdin
{"models": {"orchestration": "auto"}, "tools": [...], "tool_resources": {...}}
EOF
Prepare and validate the spec, then create the agent via SQL:
uv run python ../scripts/prepare_agent_spec.py --fqn <DATABASE>.<SCHEMA>.<AGENT_NAME>
This validates the spec and prints it to stdout. Use the output to execute the CREATE statement via sql_execute:
CREATE OR REPLACE AGENT <DATABASE>.<SCHEMA>.<AGENT_NAME>
FROM SPECIFICATION $spec$
<PASTE THE VALIDATED SPEC JSON FROM prepare_agent_spec.py OUTPUT>
$spec$;
Example:
CREATE OR REPLACE AGENT MY_DATABASE.AGENTS.MY_SALES_AGENT
FROM SPECIFICATION $spec$
{
"models": {"orchestration": "auto"},
"tools": [{"tool_spec": {"type": "cortex_analyst_text_to_sql", "name": "query_sales", "description": "Query sales data"}}],
"tool_resources": {"query_sales": {"execution_environment": {"type": "warehouse", "warehouse": ""}, "semantic_view": "MY_DATABASE.ANALYTICS.SALES_VIEW"}}
}
$spec$;
IMPORTANT: Use $spec$ as the dollar-quote delimiter (not $$) to avoid conflicts if the spec JSON contains $$.
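Assembling the statement can be sketched like this (illustrative helper name, not a shipped script), including a guard against a spec that would collide with the delimiter:

```python
# Illustrative sketch: wrap a validated spec JSON in the CREATE statement,
# refusing specs that contain the $spec$ dollar-quote delimiter itself.
def build_create_agent_sql(fqn: str, spec_json: str) -> str:
    if "$spec$" in spec_json:
        raise ValueError("spec contains the $spec$ delimiter; pick another tag")
    return (
        f"CREATE OR REPLACE AGENT {fqn}\n"
        f"FROM SPECIFICATION $spec$\n{spec_json}\n$spec$;"
    )
```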
Example Agent Spec with Semantic Views:
{
"models": {
"orchestration": "auto"
},
"tools": [
{
"tool_spec": {
"type": "cortex_analyst_text_to_sql",
"name": "query_sales",
"description": "Query data from SALES_SEMANTIC_VIEW"
}
},
{
"tool_spec": {
"type": "cortex_analyst_text_to_sql",
"name": "query_customers",
"description": "Query data from CUSTOMERS_VIEW"
}
}
],
"tool_resources": {
"query_sales": {
"execution_environment": {
"query_timeout": 299,
"type": "warehouse",
"warehouse": ""
},
"semantic_view": "<TOOLS_DATABASE>.<TOOLS_SCHEMA>.SALES_SEMANTIC_VIEW"
},
"query_customers": {
"execution_environment": {
"query_timeout": 299,
"type": "warehouse",
"warehouse": ""
},
"semantic_view": "<TOOLS_DATABASE>.<TOOLS_SCHEMA>.CUSTOMERS_VIEW"
}
}
}
Example Agent Spec with Cortex Search Service:
{
"models": {
"orchestration": "auto"
},
"tools": [
{
"tool_spec": {
"type": "cortex_search",
"name": "search_docs",
"description": "Search documentation"
}
}
],
"tool_resources": {
"search_docs": {
"execution_environment": {
"query_timeout": 299,
"type": "warehouse",
"warehouse": ""
},
"search_service": "ENG_CORTEXSEARCH.SNOWFLAKE_INTELLIGENCE.DOCS_SEARCH"
}
}
}
Example Agent Spec with Stored Procedure (Custom Tool):
{
"models": {
"orchestration": "auto"
},
"tools": [
{
"tool_spec": {
"type": "generic",
"name": "calculate_metrics",
"description": "Calculate business metrics for a given date range and metric type",
"input_schema": {
"type": "object",
"properties": {
"start_date": {
"type": "string",
"format": "date",
"description": "Start date for metric calculation"
},
"end_date": {
"type": "string",
"format": "date",
"description": "End date for metric calculation"
},
"metric_type": {
"type": "string",
"description": "Type of metric to calculate"
}
},
"required": ["start_date", "end_date", "metric_type"]
}
}
}
],
"tool_resources": {
"calculate_metrics": {
"type": "procedure",
"identifier": "MY_DATABASE.MY_SCHEMA.CALCULATE_METRICS",
"execution_environment": {
"type": "warehouse",
"warehouse": "MY_WAREHOUSE",
"query_timeout": 300
}
}
}
}
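For generic tools, a quick structural check of the input_schema is easy to sketch (hypothetical helper, not a shipped script): every field listed in "required" should be declared under "properties", following the JSON Schema convention the spec above uses:

```python
# Illustrative helper: return any "required" fields missing from
# "properties" in a generic tool's input_schema.
def check_input_schema(schema: dict) -> list[str]:
    declared = set(schema.get("properties", {}))
    return [f for f in schema.get("required", []) if f not in declared]

# An empty result means the schema is internally consistent.
```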
Example Agent Spec with Multiple Tool Types:
{
"models": {
"orchestration": "auto"
},
"tools": [
{
"tool_spec": {
"type": "cortex_analyst_text_to_sql",
"name": "query_sales_data",
"description": "Query sales data using natural language"
}
},
{
"tool_spec": {
"type": "cortex_search",
"name": "search_docs",
"description": "Search product documentation"
}
},
{
"tool_spec": {
"type": "generic",
"name": "calculate_metrics",
"description": "Calculate business metrics",
"input_schema": {
"type": "object",
"properties": {
"start_date": {
"type": "string",
"format": "date"
},
"end_date": {
"type": "string",
"format": "date"
}
},
"required": ["start_date", "end_date"]
}
}
}
],
"tool_resources": {
"query_sales_data": {
"execution_environment": {
"query_timeout": 299,
"type": "warehouse",
"warehouse": ""
},
"semantic_view": "MY_DATABASE.MY_SCHEMA.SALES_SEMANTIC_VIEW"
},
"search_docs": {
"execution_environment": {
"query_timeout": 299,
"type": "warehouse",
"warehouse": ""
},
"search_service": "MY_DATABASE.MY_SCHEMA.DOCS_SEARCH"
},
"calculate_metrics": {
"type": "procedure",
"identifier": "MY_DATABASE.MY_SCHEMA.CALCULATE_METRICS",
"execution_environment": {
"type": "warehouse",
"warehouse": "MY_WAREHOUSE",
"query_timeout": 300
}
}
}
}
Output:
Next: Proceed to Step 5 to verify the agent was created correctly
Step 5 - Verification

Goal: Confirm the agent was created successfully with all tools configured
Actions:
Verify agent exists:
SHOW AGENTS LIKE '<AGENT_NAME>' IN SCHEMA <DATABASE>.<SCHEMA>;
Test the agent with a simple query to verify it works:
uv run python ../scripts/test_agent.py --agent-name <AGENT_NAME> \
--question "What can you do?" \
--workspace <WORKSPACE_DIR> \
--output-name test_verification.json \
--database <DATABASE> \
--schema <SCHEMA> \
--connection <CONNECTION_NAME>
# Example (using values from Step 1):
uv run python ../scripts/test_agent.py --agent-name MY_SALES_AGENT \
--question "What can you do?" \
--workspace MY_DATABASE_AGENTS_MY_SALES_AGENT \
--output-name test_verification.json \
--database MY_DATABASE \
--schema AGENTS \
--connection snowhouse
Review the response to ensure the agent answers successfully and reflects its configured tools.
Example Verification:
-- Check agent exists
SHOW AGENTS LIKE 'MY_SALES_AGENT' IN SCHEMA MY_DATABASE.AGENTS;
-- View full configuration
DESCRIBE AGENT MY_DATABASE.AGENTS.MY_SALES_AGENT;
-- Expected agent_spec:
-- {"models":{"orchestration":"auto"},"tools":[{"tool_spec":{"type":"cortex_analyst_text_to_sql","name":"query_sales","description":"Query data from SALES_SEMANTIC_VIEW"}},...]}
Verification Checklist:
Grant access to other users/roles (if needed):
Ask the user if other roles need access to the agent. If yes, read ACCESS_MANAGEMENT.md for GRANT/REVOKE instructions.
IMPORTANT: After completing Step 5, ask the user which optimization/testing option they would like to pursue (or if they want to skip this step).
Step 6 - Agent and Tool Optimization

Goal: Optimize and test the agent using various methods
Present the user with the following options:

Option 1: Audit agent against best practices
- Uses the best-practices skill to check the agent configuration against best practices

Option 2: Test agent with sample queries
- Uses the adhoc-testing-and-dataset-curation-for-cortex-agent skill

Option 3: Systematic optimization with dataset
- Uses the optimize-cortex-agent skill

Option 4: Optimize semantic views
User Prompt:
Your agent is now created with basic placeholder descriptions. How would you like to proceed?
1. Audit agent against best practices
2. Test agent with sample queries (adhoc testing)
3. Systematic optimization with dataset
4. Optimize semantic views
5. Skip optimization/testing for now
Please select an option (1-5):
If User Selects Option 1:
- Load the best-practices skill
- Use the best-practices skill to audit the agent

If User Selects Option 2:
- Load the adhoc-testing-and-dataset-curation-for-cortex-agent skill

If User Selects Option 3:
- Load the optimize-cortex-agent skill

If User Selects Option 4:
List the cortex analyst tools available in the agent.
Follow the workflow described below:
Show the pause message with agent context:
Great! Let's audit and optimize this semantic view for your agent.
**⏸️ PAUSING AGENT CREATION**
- Agent: [AGENT_NAME] (workspace: [PATH])
- Status: Tool selection in progress
- Selected tools so far: [LIST]
- Current tool being optimized: [SEMANTIC_VIEW_NAME]
I'll now walk you through the semantic view optimization workflow:
1. Download the semantic view YAML
2. Audit for best practices (duplicates, inconsistencies, missing descriptions)
3. Optimize components (dimensions, metrics, relationships, VQRs)
4. Upload the improved YAML back to Snowflake
Ready to start the audit? (Yes/No)
If user says YES:
Load the semantic-view skills
Follow the audit workflow from audit/SKILL.md
Follow optimization workflow from optimization/SKILL.md
When semantic view optimization is complete, show summary:
✅ SEMANTIC VIEW OPTIMIZATION COMPLETE
Semantic View: [NAME]
Changes made:
- Added descriptions to 12 columns
- Fixed 3 relationship issues
- Optimized 5 VQRs
- Updated custom instructions
**▶️ RESUMING AGENT CREATION**
- Agent: [AGENT_NAME]
- Returning to: Step 6 (Agent and Tool Optimization)
- Optimized semantic view "[NAME]" is now ready
If user says NO:
After optimization, ask if they want to proceed with other optimization options or finish
If User Selects Option 5:
This workflow uses sql_execute to run CREATE OR REPLACE AGENT SQL directly:
- The spec is saved to the workspace with workspace_write.py
- It is validated with prepare_agent_spec.py (prints ready-to-use JSON to stdout)
- The agent is created via sql_execute with CREATE OR REPLACE AGENT ... FROM SPECIFICATION $spec$ ... $spec$
- $spec$ is used instead of $$ to avoid conflicts

The agent specification JSON follows this structure:
{
"models": {
"orchestration": "auto"
},
"orchestration": {
"budget": {
"seconds": 900,
"tokens": 400000
}
},
"instructions": {
"orchestration": "<optional_orchestration_instructions>",
"response": "<optional_response_instructions>"
},
"tools": [
{
"tool_spec": {
"type": "cortex_analyst_text_to_sql" | "cortex_search" | "generic",
"name": "<tool_name>",
"description": "<tool_description>"
}
}
],
"tool_resources": {
"<tool_name>": {
"execution_environment": {
"query_timeout": 299,
"type": "warehouse",
"warehouse": ""
},
"semantic_view": "<fully_qualified_view_name>" |
"semantic_model_file": "@<database>.<schema>.<stage>/<model_file>.yaml" |
"search_service": "<fully_qualified_service_name>" |
"type": "procedure",
"identifier": "<fully_qualified_procedure_name>"
}
}
}
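As a quick cross-check of the structure above, the tool_resources keys each tool type expects can be sketched as follows (assumed mapping derived from the options listed in this document, not a shipped script; unknown types are skipped rather than flagged):

```python
# Illustrative mapping from tool_spec.type to the tool_resources keys it
# expects; cortex_analyst accepts either a semantic_view or a
# semantic_model_file, and "generic" uses type/identifier.
EXPECTED_RESOURCE_KEYS = {
    "cortex_analyst_text_to_sql": {"semantic_view", "semantic_model_file"},
    "cortex_search": {"search_service"},
    "generic": {"identifier"},
}

def resource_keys_ok(tool_type: str, resource: dict) -> bool:
    expected = EXPECTED_RESOURCE_KEYS.get(tool_type)
    if expected is None:
        return True  # unknown/other type: skip the check
    return bool(expected & set(resource))
```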
Key Structure Notes:
- models: Object with an orchestration field (not an array)
- orchestration: Optional object with budget settings (seconds and tokens)
- instructions: Optional object with orchestration and response instruction strings
- tools: Array of tool specifications, each with a tool_spec object containing type, name, and description
- tool_resources: Separate top-level object (not nested in tools) that maps tool names to their resources and execution environments
- tool_spec.type: Use cortex_analyst_text_to_sql for semantic views/models, cortex_search for search services, generic for stored procedures
- tool_resources options:
  - semantic_view: Fully qualified semantic view name (e.g., "DATABASE.SCHEMA.VIEW_NAME")
  - semantic_model_file: Stage path to semantic model YAML file (e.g., "@DATABASE.SCHEMA.STAGE/file.yaml")
  - search_service: Fully qualified search service name (e.g., "DATABASE.SCHEMA.SERVICE_NAME")
  - Stored procedures: type: "procedure" and identifier: "DATABASE.SCHEMA.PROCEDURE_NAME"

SQL is used for:
- Role switching (USE ROLE)
- Tool discovery (SHOW SEMANTIC VIEWS, SHOW CORTEX SEARCH SERVICES)
- Agent verification (DESCRIBE AGENT, SHOW AGENTS)
- Agent testing (via test_agent.py, which uses SQL internally)

Permission Issues

Symptom: "insufficient privileges to operate on schema" or REST API 401/403 errors
Root Cause: The role specified in --role parameter lacks CREATE AGENT privileges.
Solution:
Verify the role has required privileges:
SHOW GRANTS ON SCHEMA <DATABASE>.<SCHEMA>;
-- Look for CREATE AGENT and USAGE grants for your role
If the role lacks privileges, grant them:
-- Grant CREATE AGENT privilege on the schema
GRANT CREATE AGENT ON SCHEMA <DATABASE>.<SCHEMA> TO ROLE <your_role>;
-- Grant USAGE on the database and schema
GRANT USAGE ON DATABASE <DATABASE> TO ROLE <your_role>;
GRANT USAGE ON SCHEMA <DATABASE>.<SCHEMA> TO ROLE <your_role>;
Contact your Snowflake admin if you need privileges granted
IMPORTANT: Always specify the --role parameter with a role that has CREATE AGENT privileges (as provided in Step 1).
Symptom: "semantic view / search service does not exist" error
Solution:
GRANT USAGE ON DATABASE <TOOLS_DATABASE> TO ROLE <your_role>;
GRANT USAGE ON SCHEMA <TOOLS_DATABASE>.<TOOLS_SCHEMA> TO ROLE <your_role>;
GRANT REFERENCES ON <TOOLS_DATABASE>.<TOOLS_SCHEMA>.<semantic_view> TO ROLE <your_role>;
Symptom: "insufficient privileges" when running CREATE OR REPLACE AGENT
Root Cause: The session's current role lacks CREATE AGENT privileges.
Solution:
Check current role and switch if needed:
SELECT CURRENT_ROLE();
USE ROLE <role_with_create_agent>;
Verify the role has CREATE AGENT privileges (see Permission Issues section above)
Validate the agent specification JSON is well-formed:
- Each tool has a tool_spec wrapper
- models contains the orchestration field
- Tool types are valid (e.g., cortex_analyst_text_to_sql or cortex_search)

Ensure the target database and schema exist
If issues persist, verify grants:
SHOW GRANTS ON SCHEMA <DATABASE>.<SCHEMA>;
Give tools descriptive names (e.g., sales_data_tool).

Supported tool types:
- cortex_analyst_text_to_sql - Structured data queries via semantic views
- cortex_search - Unstructured/document search
- data_to_chart - Visualization generation
- code_interpreter - Containerized sandbox for code execution
- generic - Custom UDFs/procedures (specify function or procedure in tool_resources)

The code_interpreter tool enables a containerized sandbox environment where the agent can execute code (e.g., bash, Python). This requires a compute pool to be configured on the account and PrPr parameters to be enabled.

Example spec fragment enabling code_interpreter:
{
"tools": [
{
"tool_spec": {
"type": "code_interpreter",
"name": "code_interpreter"
}
}
],
"tool_resources": {
"code_interpreter": {
"enabled": "true"
}
}
}
The compute pool, database, schema, and other sandbox infrastructure settings are configured at the account level via GS parameters, not in the agent spec. Contact your account administrator to ensure the sandbox compute pool is provisioned.
See ADD_SKILLS.md for detailed instructions on adding server-side skills to agents.