Execute KQL management commands (table management, ingestion, policies, functions, materialized views) against Fabric Eventhouse and KQL Databases via CLI. Use when the user wants to:
1. Create or alter KQL tables, columns, or functions
2. Ingest data into an Eventhouse (inline, from storage, streaming)
3. Configure retention, caching, or partitioning policies
4. Create or manage materialized views and update policies
5. Manage data mappings for ingestion pipelines
6. Deploy KQL schema via scripts

Triggers: "create kql table", "kql ingestion", "ingest into eventhouse", "kql function", "materialized view", "kql retention policy", "eventhouse schema", "kql authoring", "create eventhouse table", "kql mapping"
Update Check — ONCE PER SESSION (mandatory)

The first time this skill is used in a session, run the check-updates skill before proceeding.
- GitHub Copilot CLI / VS Code: invoke the `check-updates` skill.
- Claude Code / Cowork / Cursor / Windsurf / Codex: compare local vs remote package.json version.
- Skip if the check was already performed earlier in this session.
CRITICAL NOTES
- To find the workspace details (including its ID) from a workspace name: list all workspaces, then use JMESPath filtering
- To find the item details (including its ID) from workspace ID, item type, and item name: list all items of that type in that workspace, then use JMESPath filtering
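A minimal sketch of the name-to-ID lookup. It filters a mocked workspace list with jq (the skill's JSON tool); the names and IDs below are fabricated placeholders. Against a live tenant, pipe the `az rest .../v1/workspaces` response into the same filter, or use az's built-in JMESPath flag, e.g. `--query "value[?displayName=='Analytics'].id | [0]"`.

```shell
# Mocked List Workspaces response; IDs/names are placeholders
cat > /tmp/workspaces.json << 'EOF'
{"value":[{"id":"11111111-aaaa","displayName":"Sales"},
          {"id":"22222222-bbbb","displayName":"Analytics"}]}
EOF
# Filter by display name, emit only the ID
jq -r --arg name "Analytics" \
  '.value[] | select(.displayName == $name) | .id' /tmp/workspaces.json
# → 22222222-bbbb
```

The same `select(.displayName == $name)` filter works unchanged on the item-list endpoint to resolve item IDs.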
| Task | Reference | Notes |
|---|---|---|
| Finding Workspaces and Items in Fabric | COMMON-CLI.md § Finding Workspaces and Items in Fabric | Mandatory — READ link first [needed for workspace/item ID resolution] |
| Fabric Topology & Key Concepts | COMMON-CORE.md § Fabric Topology & Key Concepts | Hierarchy, Finding Things in Fabric |
| Environment URLs | COMMON-CORE.md § Environment URLs | KQL Cluster URI, KQL Ingestion URI |
| Authentication & Token Acquisition | COMMON-CORE.md § Authentication & Token Acquisition | Wrong audience = 401; KQL audience: kusto.kusto.windows.net |
| Core Control-Plane REST APIs | COMMON-CORE.md § Core Control-Plane REST APIs | List Workspaces, List Items, Item Creation |
| Pagination | COMMON-CORE.md § Pagination | |
| Long-Running Operations (LRO) | COMMON-CORE.md § Long-Running Operations (LRO) | |
| Rate Limiting & Throttling | COMMON-CORE.md § Rate Limiting & Throttling | |
| OneLake Data Access | COMMON-CORE.md § OneLake Data Access | Requires storage.azure.com token, not Fabric token |
| Job Execution | COMMON-CORE.md § Job Execution | |
| Capacity Management | COMMON-CORE.md § Capacity Management | |
| Gotchas & Troubleshooting | COMMON-CORE.md § Gotchas & Troubleshooting | |
| Best Practices | COMMON-CORE.md § Best Practices | |
| Tool Selection Rationale | COMMON-CLI.md § Tool Selection Rationale | |
| Authentication Recipes | COMMON-CLI.md § Authentication Recipes | az login flows and token acquisition |
| Fabric Control-Plane API via az rest | COMMON-CLI.md § Fabric Control-Plane API via az rest | Always pass --resource https://api.fabric.microsoft.com or az rest fails |
| Pagination Pattern | COMMON-CLI.md § Pagination Pattern | |
| Long-Running Operations (LRO) Pattern | COMMON-CLI.md § Long-Running Operations (LRO) Pattern | |
| OneLake Data Access via curl | COMMON-CLI.md § OneLake Data Access via curl | Use curl not az rest (different token audience) |
| SQL / TDS Data-Plane Access | COMMON-CLI.md § SQL / TDS Data-Plane Access | sqlcmd (Go) — not for KQL, but useful for cross-workload |
| Job Execution (CLI) | COMMON-CLI.md § Job Execution | |
| OneLake Shortcuts | COMMON-CLI.md § OneLake Shortcuts | |
| Capacity Management (CLI) | COMMON-CLI.md § Capacity Management | |
| Composite Recipes | COMMON-CLI.md § Composite Recipes | |
| Gotchas & Troubleshooting (CLI-Specific) | COMMON-CLI.md § Gotchas & Troubleshooting (CLI-Specific) | az rest audience, shell escaping, token expiry |
| Quick Reference: az rest Template | COMMON-CLI.md § Quick Reference: az rest Template | |
| Quick Reference: Token Audience / CLI Tool Matrix | COMMON-CLI.md § Quick Reference: Token Audience ↔ CLI Tool Matrix | Which --resource + tool for each service |
| Authoring Capability Matrix | EVENTHOUSE-AUTHORING-CORE.md § Authoring Capability Matrix | Read first — KQL Database vs Shortcut (read-only); connection requires Admin/Ingestor role |
| Table Management and Schema Evolution | EVENTHOUSE-AUTHORING-CORE.md § Table Management and Schema Evolution | Create Table, Create-Merge (idempotent), Alter / Rename / Drop, Schema Evolution (Rename, Swap/Blue-Green) |
| Ingestion and Data Mappings | EVENTHOUSE-AUTHORING-CORE.md § Ingestion and Data Mappings | Inline, Set-or-Append/Replace, From Storage, Streaming, Data Mappings (CSV, JSON) |
| Policies | EVENTHOUSE-AUTHORING-CORE.md § Policies | Retention, Caching, Partitioning, Merge |
| Materialized Views | EVENTHOUSE-AUTHORING-CORE.md § Materialized Views | Create, Alter, Lifecycle, Supported aggregations |
| Stored Functions and Update Policies | EVENTHOUSE-AUTHORING-CORE.md § Stored Functions and Update Policies | Stored Functions, Update Policies (auto-transform on ingestion) |
| External Tables | EVENTHOUSE-AUTHORING-CORE.md § External Tables | OneLake / ADLS External Table, Query External Table |
| Permission Model | EVENTHOUSE-AUTHORING-CORE.md § Permission Model | Database Roles, Grant Permissions |
| Authoring Gotchas and Troubleshooting | EVENTHOUSE-AUTHORING-CORE.md § Authoring Gotchas and Troubleshooting Reference | 10 numbered issues with cause + fix |
| Bash Templates | authoring-script-templates.md § Bash Templates | Create Table + Ingest, Schema Deployment, Export Schema, Set Retention/Caching |
| PowerShell Templates | authoring-script-templates.md § PowerShell Templates | Create Table + Ingest, Schema Deployment |
| Tool Stack | SKILL.md § Tool Stack | |
| Connection | SKILL.md § Connection | |
| Authoring Scope | SKILL.md § Authoring Scope | |
| Execute KQL Command | SKILL.md § Execute KQL Command | az rest pattern — write JSON body, then execute |
| Table Management via CLI | SKILL.md § Table Management via CLI | Create Table, Add Column, Drop Table |
| Data Ingestion via CLI | SKILL.md § Data Ingestion via CLI | Inline, From Storage, From OneLake, Set-or-Append |
| Policies via CLI | SKILL.md § Policies via CLI | Retention, Caching, Streaming Ingestion |
| Materialized Views via CLI | SKILL.md § Materialized Views via CLI | |
| Functions and Update Policies via CLI | SKILL.md § Functions and Update Policies via CLI | Create Function, Create Update Policy |
| Schema Evolution via CLI | SKILL.md § Schema Evolution via CLI | Safe Schema Deployment Script, Export Current Schema |
| Monitoring Authoring Operations | SKILL.md § Monitoring Authoring Operations | |
| Must / Prefer / Avoid / Troubleshooting | SKILL.md § Must / Prefer / Avoid / Troubleshooting | MUST DO / AVOID / PREFER checklists |
| Agentic Workflows | SKILL.md § Agentic Workflows | Exploration Before Authoring, Script Generation Workflow |
| Examples | SKILL.md § Examples | |
| Agent Integration Notes | SKILL.md § Agent Integration Notes | |
| Tool | Purpose | Install |
|---|---|---|
| az cli | KQL management commands via Kusto REST API; Fabric control-plane discovery | winget install Microsoft.AzureCLI |
| jq | JSON processing and output formatting | winget install jqlang.jq |
Same as eventhouse-consumption-cli. Authoring requires elevated roles:
# Discover KQL Database query URI
WS_ID="<workspace-id>"
az rest --method GET \
--url "https://api.fabric.microsoft.com/v1/workspaces/${WS_ID}/kqlDatabases" \
--resource "https://api.fabric.microsoft.com" \
| jq '.value[] | {name: .displayName, queryUri: .properties.queryServiceUri}'
# Set connection variables
CLUSTER_URI="https://<cluster>.kusto.fabric.microsoft.com"
DB_NAME="MyDatabase"
# Verify admin access
cat > /tmp/kql_body.json << EOF
{"db":"${DB_NAME}","csl":".show database ${DB_NAME} principals | where Role == 'Admin'"}
EOF
az rest --method POST \
--url "${CLUSTER_URI}/v1/rest/mgmt" \
--resource "https://kusto.kusto.windows.net" \
--headers "Content-Type=application/json" \
--body @/tmp/kql_body.json \
| jq '.Tables[0].Rows'
| Operation | Command Pattern |
|---|---|
| Create table | `.create-merge table T (cols)` |
| Add column | `.alter-merge table T (NewCol: type)` |
| Drop table | `.drop table T ifexists` |
| Ingest data | `.ingest into table T (...)` |
| Set retention | `.alter table T policy retention ...` |
| Set caching | `.alter table T policy caching hot = Nd` |
| Create function | `.create-or-alter function F() { ... }` |
| Create materialized view | `.create materialized-view MV on table T { ... }` |
| Create update policy | `.alter table T policy update ...` |
| Create data mapping | `.create table T ingestion csv mapping ...` |
All KQL management commands in this skill follow the same `az rest` pattern. After setting `CLUSTER_URI` and `DB`, write the JSON body to `/tmp/kql_body.json` and execute:
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":"<KQL management command>"}
EOF
az rest --method POST \
--url "${CLUSTER_URI}/v1/rest/mgmt" \
--resource "https://kusto.kusto.windows.net" \
--headers "Content-Type=application/json" \
--body @/tmp/kql_body.json \
| jq '.Tables[0].Rows'
Nested JSON — For commands whose KQL contains embedded JSON (policies, mappings), use `<< 'EOF'` (single-quoted) to prevent shell expansion of backslash-escaped quotes, and replace `${DB}` with the literal database name.
PowerShell equivalent — `@{db=$Database;csl=$Command} | ConvertTo-Json -Compress | Out-File $env:TEMP\kql_body.json -Encoding utf8NoBOM` then `--body "@$env:TEMP\kql_body.json"`. See PowerShell Templates.
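The write-body-then-execute steps can also be wrapped in a small convenience function. This is an optional sketch, not part of the skill; it uses `jq -n` to build the body so that any quotes inside the command are JSON-escaped automatically, avoiding the hand-escaping caveats above.

```shell
# Hedged helper sketch: requires CLUSTER_URI and DB to be set beforehand
kql_mgmt() {
  local csl="$1"
  # jq -n builds valid JSON regardless of quotes inside the command
  jq -n --arg db "$DB" --arg csl "$csl" '{db: $db, csl: $csl}' > /tmp/kql_body.json
  az rest --method POST \
    --url "${CLUSTER_URI}/v1/rest/mgmt" \
    --resource "https://kusto.kusto.windows.net" \
    --headers "Content-Type=application/json" \
    --body @/tmp/kql_body.json \
    | jq '.Tables[0].Rows'
}
# Usage: kql_mgmt '.show tables'
```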
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".create-merge table Events (Timestamp: datetime, EventType: string, UserId: string, Properties: dynamic, Duration: real)"}
EOF
Execute `/tmp/kql_body.json` — see Execute KQL Command
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".alter-merge table Events (Region: string)"}
EOF
Execute `/tmp/kql_body.json` — see Execute KQL Command
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".drop table Events ifexists"}
EOF
Execute `/tmp/kql_body.json` — see Execute KQL Command
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".ingest inline into table Events <| 2025-01-15T10:00:00Z,Login,user1,{},0.5\n2025-01-15T10:01:00Z,Click,user2,{},0.2"}
EOF
Execute `/tmp/kql_body.json` — see Execute KQL Command
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".ingest into table Events (h'https://mystorage.blob.core.windows.net/data/events.csv.gz;impersonate') with (format='csv', ingestionMappingReference='EventsCsvMapping', ignoreFirstRecord=true)"}
EOF
Execute `/tmp/kql_body.json` — see Execute KQL Command
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".ingest into table Events (h'abfss://[email protected]/lakehouse.Lakehouse/Files/events.parquet;impersonate') with (format='parquet')"}
EOF
Execute `/tmp/kql_body.json` — see Execute KQL Command
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".set-or-append CleanEvents <| RawEvents | where IsValid == true | project Timestamp, EventType, UserId"}
EOF
Execute `/tmp/kql_body.json` — see Execute KQL Command
# Set 365-day retention
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".alter table Events policy retention '{\"SoftDeletePeriod\":\"365.00:00:00\",\"Recoverability\":\"Enabled\"}'"}
EOF
Execute `/tmp/kql_body.json` — see Execute KQL Command
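As an alternative to hand-escaping the nested policy JSON, jq can build both layers — the policy object and the request body — so the `\"` escapes come out right automatically. A sketch with placeholder database/table names:

```shell
# Build the inner retention-policy JSON, compact, with jq
DAYS=365
policy=$(jq -cn --arg d "${DAYS}.00:00:00" \
  '{SoftDeletePeriod: $d, Recoverability: "Enabled"}')
# Embed it in the KQL command; --arg JSON-escapes the inner quotes
jq -n --arg db "MyDB" \
  --arg csl ".alter table Events policy retention '${policy}'" \
  '{db: $db, csl: $csl}' > /tmp/kql_body.json
```

The resulting file matches the hand-escaped body above and can be executed with the same `az rest` call.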
# Keep last 30 days in hot cache
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".alter table Events policy caching hot = 30d"}
EOF
Execute `/tmp/kql_body.json` — see Execute KQL Command
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".alter table Events policy streamingingestion enable"}
EOF
Execute `/tmp/kql_body.json` — see Execute KQL Command
# Create materialized view with backfill
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".create materialized-view with (backfill=true) HourlyEventCounts on table Events { Events | summarize Count = count(), LastSeen = max(Timestamp) by EventType, bin(Timestamp, 1h) }"}
EOF
Execute `/tmp/kql_body.json` — see Execute KQL Command
# Check health
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".show materialized-view HourlyEventCounts statistics"}
EOF
Execute `/tmp/kql_body.json` — see Execute KQL Command
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".create-or-alter function with (docstring='Parse raw events', folder='ETL') ParseRawEvents() { RawEvents | extend Parsed = parse_json(RawData) | project Timestamp = todatetime(Parsed.timestamp), EventType = tostring(Parsed.eventType), UserId = tostring(Parsed.userId) }"}
EOF
Execute `/tmp/kql_body.json` — see Execute KQL Command
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".alter table ParsedEvents policy update @'[{\"IsEnabled\":true,\"Source\":\"RawEvents\",\"Query\":\"ParseRawEvents()\",\"IsTransactional\":true}]'"}
EOF
Execute `/tmp/kql_body.json` — see Execute KQL Command
Save management commands in a `.kql` file (one per line), then execute each command via `az rest`:
# deploy_schema.kql contains one command per line:
# .create-merge table Events (Timestamp: datetime, EventType: string, UserId: string, Properties: dynamic)
# .create-merge table ParsedEvents (Timestamp: datetime, EventType: string, UserId: string, PageName: string)
# .alter table Events policy retention '{\"SoftDeletePeriod\":\"365.00:00:00\",\"Recoverability\":\"Enabled\"}'
# .alter table Events policy caching hot = 30d
# Execute each command from the file (see "Execute KQL Command" section)
# NOTE: double quotes inside commands must already be JSON-escaped (\") in the file
while IFS= read -r cmd; do
  [[ "$cmd" =~ ^// ]] && continue   # skip KQL comment lines
  [[ -z "$cmd" ]] && continue       # skip blank lines
  cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":"${cmd}"}
EOF
  az rest --method POST \
    --url "${CLUSTER_URI}/v1/rest/mgmt" \
    --resource "https://kusto.kusto.windows.net" \
    --headers "Content-Type=application/json" \
    --body @/tmp/kql_body.json \
    | jq '.Tables[0].Rows'
done < deploy_schema.kql
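A hedged variant of the loop above: letting jq JSON-encode each command means the `.kql` file can hold plain, unescaped KQL. The sample file contents and database name are placeholders; the `az rest` call is the same as in the loop above.

```shell
DB="MyDB"
# Illustrative schema file — plain KQL, no hand-escaped quotes needed
cat > /tmp/deploy_schema.kql << 'EOF'
// comment line, skipped by the loop
.create-merge table Events (Timestamp: datetime, EventType: string)
.alter table Events policy caching hot = 30d
EOF
while IFS= read -r cmd; do
  [[ "$cmd" =~ ^// || -z "$cmd" ]] && continue
  # jq escapes any quotes inside the command automatically
  jq -n --arg db "$DB" --arg csl "$cmd" '{db: $db, csl: $csl}' > /tmp/kql_body.json
  # az rest --method POST ... --body @/tmp/kql_body.json   (as in the loop above)
  cat /tmp/kql_body.json
done < /tmp/deploy_schema.kql
```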
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".show database ${DB} schema as csl script"}
EOF
az rest --method POST \
--url "${CLUSTER_URI}/v1/rest/mgmt" \
--resource "https://kusto.kusto.windows.net" \
--headers "Content-Type=application/json" \
--body @/tmp/kql_body.json \
| jq -r '.Tables[0].Rows[][0]' > current_schema.kql
// Recent management commands
.show commands
| where StartedOn > ago(1h)
| project StartedOn, CommandType, Text = substring(Text, 0, 100), State, Duration
| order by StartedOn desc
// Ingestion failures
.show ingestion failures
| where FailedOn > ago(24h)
| summarize FailureCount = count() by ErrorCode, Table
| order by FailureCount desc
// Materialized view health
.show materialized-views
| project Name, IsEnabled, IsHealthy, MaterializedTo
MUST DO
- Use idempotent commands: `.create-merge table`, `.create-or-alter function`, `.create table ifnotexists`.
- Hold the Admin or Ingestor role.
- Use `;impersonate` in storage URIs when ingesting from OneLake or Blob Storage.
- Use `az rest` with a loop for deploying multi-command schema files.

PREFER
- `.create-merge table` over `.create table` for safe schema evolution.
- Export schema with `.show database DB schema as csl script`, store in git.

AVOID
- `.drop table` without `ifexists` — fails on missing tables.
- `.alter table` to add columns — use `.alter-merge table` instead (additive only).

| Symptom | Fix |
|---|---|
| `.create table` fails "already exists" | Use `.create-merge table` or `.create table ifnotexists` |
| Ingestion succeeds but table empty | Check data mappings: `.show table T ingestion csv mappings` |
| Update policy not firing | Verify function runs standalone; check `.show table T policy update` |
| Forbidden (403) on management commands | Request admin or ingestor database role |
| Materialized view stuck | Check `.show materialized-view MV statistics`; may need `.disable`/`.enable` |
| OneLake ingest auth error | Add `;impersonate` to the `abfss://` URI |
Always check for explicit intent before doing anything:
Step 0 → Is the request specific? Does it name a table, operation, and/or schema?
→ NO → Ask: "What would you like to set up? Options: create tables,
configure policies, set up ingestion mappings, create materialized views."
STOP — do not proceed until user specifies.
→ YES → Continue to Step 1.
Step 1 → .show tables details // what exists?
Step 2 → .show table <TABLE> schema as json // current columns
Step 3 → .show table <TABLE> policy retention // current policies
Step 4 → Plan changes (create-merge, alter, etc.)
Step 5 → Execute changes
Step 6 → Verify: .show table <TABLE> schema as json // confirm changes
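Steps 1–3 of the exploration workflow above can be sketched as bodies for the `az rest` pattern; table and database names below are placeholders, and each body would be executed via the Execute KQL Command pattern.

```shell
TABLE="Events"
for csl in \
  ".show tables details" \
  ".show table ${TABLE} schema as json" \
  ".show table ${TABLE} policy retention"; do
  # Build each request body; execute with az rest as in Execute KQL Command
  jq -n --arg db "MyDB" --arg csl "$csl" '{db: $db, csl: $csl}' > /tmp/kql_body.json
  cat /tmp/kql_body.json
done
```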
Step 1 → Understand requirements from user
Step 2 → Generate KQL management commands
Step 3 → Save to .kql file
Step 4 → Deploy via az rest (one command at a time)
Step 5 → Verify deployed state matches intent
# Create table
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".create-merge table SensorData (Timestamp: datetime, DeviceId: string, Temperature: real, Humidity: real, Location: dynamic)"}
EOF
Execute `/tmp/kql_body.json` — see Execute KQL Command
# Set retention
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".alter table SensorData policy retention '{\"SoftDeletePeriod\":\"90.00:00:00\",\"Recoverability\":\"Enabled\"}'"}
EOF
Execute `/tmp/kql_body.json` — see Execute KQL Command
# Set caching
cat > /tmp/kql_body.json << EOF
{"db":"${DB}","csl":".alter table SensorData policy caching hot = 7d"}
EOF
Execute `/tmp/kql_body.json` — see Execute KQL Command
# Create JSON mapping
cat > /tmp/kql_body.json << 'EOF'
{"db":"MyDB","csl":".create table SensorData ingestion json mapping 'SensorJsonMapping' '[{\"column\":\"Timestamp\",\"path\":\"$.ts\",\"datatype\":\"datetime\"},{\"column\":\"DeviceId\",\"path\":\"$.deviceId\",\"datatype\":\"string\"},{\"column\":\"Temperature\",\"path\":\"$.temp\",\"datatype\":\"real\"},{\"column\":\"Humidity\",\"path\":\"$.humidity\",\"datatype\":\"real\"},{\"column\":\"Location\",\"path\":\"$.location\",\"datatype\":\"dynamic\"}]'"}
EOF
Execute `/tmp/kql_body.json` — see Execute KQL Command
// 1. Target table
.create-merge table ParsedLogs (Timestamp: datetime, Level: string, Message: string, Source: string)
// 2. Transform function
.create-or-alter function ParseRawLogs() {
RawLogs
| extend J = parse_json(RawMessage)
| project
Timestamp = todatetime(J.timestamp),
Level = tostring(J.level),
Message = tostring(J.message),
Source = tostring(J.source)
}
// 3. Attach update policy
.alter table ParsedLogs policy update
@'[{"IsEnabled":true,"Source":"RawLogs","Query":"ParseRawLogs()","IsTransactional":true}]'
Authoring requires an elevated database role (Admin or Ingestor).