Diagnose and fix issues with events, KPIs, custom data, or SDK integration. AUTO-INVOKE when user mentions: events not appearing, KPIs showing wrong values, KPIs showing strings instead of numbers, custom data missing, null KPIs, authentication errors, CLI not working, events not associated with agent, monitoring broken, SDK errors, or any Olakai-related problem. TRIGGER KEYWORDS: olakai, troubleshoot, debug, not working, events missing, KPI wrong, KPI null, KPI string, customData missing, authentication failed, CLI error, no events, events not appearing, diagnose, fix olakai, broken, SDK error, monitoring issue, API key invalid, events not tracked. DO NOT load for: initial setup (use olakai-new-project or olakai-integrate), or generating reports (use olakai-reports).
This skill helps diagnose and fix common issues with Olakai AI agent monitoring, KPI calculations, and SDK integration.
For full documentation, see: https://app.olakai.ai/llms.txt
Always diagnose by generating a real event and inspecting it. Don't guess - look at the actual data.
# 1. Trigger your agent/app to generate an event
# 2. Fetch the event
olakai activity list --agent-id YOUR_AGENT_ID --limit 1 --json
# 3. Inspect it completely
olakai activity get EVENT_ID --json | jq '{customData, kpiData}'
This reveals exactly what's happening: whether the event arrived, what customData the SDK actually sent, and what kpiData was computed.
Run these first to understand the current state:
# Check CLI authentication
olakai whoami
# List recent events (are any coming through?)
olakai activity list --limit 10
# List agents (is your agent registered?)
olakai agents list
# List custom data configs (are they set up?)
olakai custom-data list
# List KPIs for an agent
olakai kpis list --agent-id YOUR_AGENT_ID
# Check session/chat decoration status (for CHAT-scope KPIs)
olakai activity sessions --agent-id YOUR_AGENT_ID
Events from your agent aren't showing up in olakai activity list or the dashboard.
1. Verify API Key
# Check environment variable is set
echo $OLAKAI_API_KEY
# Should start with "sk_" and be ~40+ characters
# If empty or wrong, set it:
export OLAKAI_API_KEY="sk_your_key_here"
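The same check can be done at startup in code so a bad key fails loudly instead of silently dropping events. A minimal TypeScript sketch; the "sk_" prefix and length come from the comment above, so adjust if your keys differ:
// Fail fast if the API key is missing or obviously malformed.
const key = process.env.OLAKAI_API_KEY;
if (!key || !key.startsWith("sk_") || key.length < 40) {
  throw new Error("OLAKAI_API_KEY is missing or malformed - check your environment");
}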
2. Check SDK Initialization
TypeScript - ensure init() is awaited:
const olakai = new OlakaiSDK({ apiKey: process.env.OLAKAI_API_KEY! });
await olakai.init(); // <-- Must await this!
Python - ensure config is called before instrumentation:
olakai_config(os.getenv("OLAKAI_API_KEY")) # <-- Must be first
instrument_openai() # <-- Then instrument
3. Enable Debug Mode
TypeScript:
const olakai = new OlakaiSDK({
apiKey: process.env.OLAKAI_API_KEY!,
debug: true, // <-- Add this
});
Python:
olakai_config(api_key, debug=True)
Look for error messages in console output.
4. Test Direct API Call
curl -X POST "https://app.olakai.ai/api/monitoring/prompt" \
-H "Content-Type: application/json" \
-H "x-api-key: $OLAKAI_API_KEY" \
-d '{
"prompt": "Test prompt",
"response": "Test response",
"app": "test-agent"
}'
Expected: 200 OK with event ID
If 401: Invalid API key
If 400: Check request format
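If curl succeeds from your shell but events from the app still don't arrive, replay the same request from inside your application's runtime to rule out environment differences. A sketch using Node 18+ global fetch; the endpoint, headers, and body mirror the curl call above:
// Same request as the curl example, sent from your app's runtime.
const res = await fetch("https://app.olakai.ai/api/monitoring/prompt", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "x-api-key": process.env.OLAKAI_API_KEY!,
  },
  body: JSON.stringify({
    prompt: "Test prompt",
    response: "Test response",
    app: "test-agent",
  }),
});
console.log(res.status); // 200 = OK, 401 = invalid key, 400 = bad request format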
5. Check Network/Firewall
Ensure your environment can reach https://app.olakai.ai
curl -I https://app.olakai.ai/api/health
| Problem | Solution |
|---|---|
| API key not set | Export OLAKAI_API_KEY environment variable |
| Wrong API key | Generate new key via CLI: olakai agents create --with-api-key |
| init() not awaited | Add await before olakai.init() |
| SDK not wrapping client | Ensure you're using the returned wrapped client (see the sketch below) |
| Firewall blocking | Whitelist app.olakai.ai |
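For the "SDK not wrapping client" row, the usual mistake is calling wrap() and then continuing to use the original, unwrapped client. A sketch of the difference, assuming an initialized olakai instance and a messages array are in scope; the wrap() call follows the usage shown later in this guide:
import OpenAI from "openai";

const rawClient = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// WRONG: the wrapped client returned by wrap() is thrown away,
// so calls made on rawClient are never monitored.
olakai.wrap(rawClient, { provider: "openai" });
await rawClient.chat.completions.create({ model: "gpt-4o", messages });

// CORRECT: keep and use the client that wrap() returns.
const openai = olakai.wrap(rawClient, { provider: "openai" });
await openai.chat.completions.create({ model: "gpt-4o", messages });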
KPI values display as "VariableName" instead of numeric values like 42.
Example:
"kpiData": {
"Tools Discovered": "ToolsDiscovered", // Wrong: string
"New Tools Found": "NewToolsFound" // Wrong: string
}
Should be:
"kpiData": {
"Tools Discovered": 20, // Correct: number
"New Tools Found": 3 // Correct: number
}
KPI formulas are stored as raw strings instead of parsed AST objects.
1. Check KPI Formula Storage
olakai kpis list --agent-id YOUR_AGENT_ID --json | jq '.[] | {name, calculatorParams}'
Wrong (raw string):
{
"name": "Tools Discovered",
"calculatorParams": {
"formula": "ToolsDiscovered" // <-- String, not object
}
}
Correct (AST object):
{
"name": "Tools Discovered",
"calculatorParams": {
"formula": {
"type": "variable",
"name": "ToolsDiscovered"
}
}
}
2. Check CustomDataConfig Exists
olakai custom-data list --json | jq '.[].name'
Ensure every variable referenced in KPI formulas has a corresponding CustomDataConfig.
Option A: Update via CLI (Recommended)
The CLI now validates and parses formulas automatically:
# This will parse "ToolsDiscovered" into proper AST
olakai kpis update KPI_ID --formula "ToolsDiscovered"
Option B: Validate First, Then Check
# Validate the formula
olakai kpis validate --formula "ToolsDiscovered" --agent-id YOUR_AGENT_ID
# Should return:
# {
# "valid": true,
# "type": "number",
# "parsedFormula": { "type": "variable", "name": "ToolsDiscovered" }
# }
Option C: Recreate the KPI
# Delete the broken KPI
olakai kpis delete KPI_ID --force
# Create with proper formula (CLI now parses automatically)
olakai kpis create \
--name "Tools Discovered" \
--agent-id YOUR_AGENT_ID \
--calculator-id formula \
--formula "ToolsDiscovered" \
--aggregation SUM
After fixing, trigger a new event and check:
olakai activity list --agent-id YOUR_AGENT_ID --limit 1 --json | jq '.prompts[0].kpiData'
Should now show numeric values.
You're sending customData but it's not showing in event details.
1. Check Event Details
olakai activity get EVENT_ID --json | jq '.customData'
If null or missing fields, the SDK isn't sending them.
2. Verify SDK Code
TypeScript - customData in call options:
const response = await openai.chat.completions.create(
{ model: "gpt-4o", messages },
{
customData: {
myField: "value", // <-- Check this is present
myNumber: 42,
},
}
);
TypeScript - customData in manual event:
olakai.event({
prompt,
response,
customData: {
myField: "value",
myNumber: 42,
},
});
3. Check Field Names Match CustomDataConfig
Field names are case-sensitive. Ensure exact match:
# What you configured
olakai custom-data list --json | jq '.[].name'
# Output: "ToolsDiscovered", "NewToolsFound"
# What you're sending (must match exactly)
customData: {
ToolsDiscovered: 20, // Correct
toolsDiscovered: 20, // WRONG - case mismatch
tools_discovered: 20, // WRONG - different format
}
| Problem | Solution |
|---|---|
| Field name case mismatch | Match exact case from custom-data list --agent-id ID |
| customData not in options | Move to second argument of create() |
| Missing CustomDataConfig | Create with olakai custom-data create --agent-id ID --name X --type NUMBER |
| Sending wrong data type | NUMBER fields need numbers, STRING fields need strings |
You're sending fields in customData but they can't be used in KPI formulas - they don't appear as available variables.
The SDK accepts any JSON in customData, but only fields with CustomDataConfigs become KPI variables.
SDK customData → CustomDataConfig (Schema) → Context Variable → KPI Formula
↑
REQUIRED for field
to be usable in KPIs
Fields sent without a matching CustomDataConfig are stored with the event but never become KPI variables.
1. List what CustomDataConfigs exist:
olakai custom-data list --json | jq '.[].name'
2. Compare to what you're sending in SDK:
// If you're sending these fields:
customData: {
ItemsProcessed: 10, // Is there a CustomDataConfig for this?
SuccessRate: 0.95, // Is there a CustomDataConfig for this?
RandomField: "xyz", // Is there a CustomDataConfig for this?
}
3. Check for mismatches: any field you send that does not appear in the config list cannot be referenced in a KPI formula.
Create CustomDataConfigs for every field you want to use in KPIs:
# For each field you need in KPIs, create a config (replace YOUR_AGENT_ID)
olakai custom-data create --agent-id YOUR_AGENT_ID --name "ItemsProcessed" --type NUMBER
olakai custom-data create --agent-id YOUR_AGENT_ID --name "SuccessRate" --type NUMBER
olakai custom-data create --agent-id YOUR_AGENT_ID --name "RandomField" --type STRING
# Verify for this agent
olakai custom-data list --agent-id YOUR_AGENT_ID
Best Practice: Design your CustomDataConfigs FIRST, then write SDK code that sends only those fields.
| Mistake | Problem | Fix |
|---|---|---|
| Sending extra "helpful" fields | They're ignored for KPIs | Only send registered fields |
| Different casing in SDK vs config | Field doesn't match its config, so the value can't be used in KPIs | Match exact case |
| Creating KPI before CustomDataConfig | Formula can't resolve variable | Create config first |
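One way to enforce the configs-first best practice above in TypeScript is to declare a type whose keys are exactly the configured field names, so a renamed or unregistered field fails at compile time. A sketch; the field names are the ones used in the examples above, and olakai, prompt, and response are assumed to be in scope:
// Keys must match the CustomDataConfig names exactly (case-sensitive).
type RegisteredCustomData = {
  ItemsProcessed: number;
  SuccessRate: number;
};

const customData: RegisteredCustomData = {
  ItemsProcessed: 10,
  SuccessRate: 0.95,
  // itemsProcessed: 10,  // <-- would be a compile-time error (case mismatch / unknown key)
};

olakai.event({ prompt, response, customData });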
You created KPIs for one agent and expected them to apply to a different agent. The second agent's events show no kpiData, or olakai kpis list --agent-id SECOND_AGENT_ID returns an empty list.
KPIs are unique per agent. Each KPI definition is bound to exactly one agent by its agentId. KPIs cannot be shared, inherited, or reused across agents — even within the same workflow or account.
| Concept | Scope | Shared Across Agents? |
|---|---|---|
| CustomDataConfig | Account-level | ✅ Yes — created once, available to all agents |
| KPI | Agent-level | ❌ No — belongs to one agent only |
# Check KPIs on the FIRST agent (where they were originally created)
olakai kpis list --agent-id AGENT_A_ID --json | jq '.[].name'
# Output: "Items Processed", "Success Rate"
# Check KPIs on the SECOND agent (where you expected them to work)
olakai kpis list --agent-id AGENT_B_ID --json | jq '.[].name'
# Output: (empty) ← KPIs don't carry over
Create the KPIs separately for the second agent:
# Recreate each KPI for the new agent
olakai kpis create \
--name "Items Processed" \
--agent-id AGENT_B_ID \
--calculator-id formula \
--formula "ItemsProcessed" \
--aggregation SUM
olakai kpis create \
--name "Success Rate" \
--agent-id AGENT_B_ID \
--calculator-id formula \
--formula "SuccessRate * 100" \
--aggregation AVERAGE
When creating a new agent, always create its KPIs explicitly and verify them:
olakai kpis list --agent-id NEW_AGENT_ID
Note: CustomDataConfigs do NOT need to be recreated — they are account-level and shared. Only KPIs are agent-specific.
CustomDataConfigs exist for fields that are already tracked by the platform (sessionId, agentId, timestamps, etc.), cluttering the configuration.
The SDK accepts any JSON in customData, so agents sometimes send "helpful" extra data that's already tracked elsewhere.
| Field | How It's Tracked | Don't Create Config For |
|---|---|---|
| Session ID | SDK automatic grouping | sessionId, session |
| Agent ID | API key association | agentId, agent |
| User email | userEmail parameter | email, userEmail |
| Timestamps | Event metadata | timestamp, createdAt |
| Token count | tokens parameter | tokenCount, totalTokens |
| Model | Auto-detected from call | model, modelName |
| Provider | Wrapped client config | provider |
1. Identify redundant configs:
olakai custom-data list --json | jq '.[].name'
Look for names like: sessionId, agentId, timestamp, model, provider, tokenCount
2. Check if they're used in KPIs:
olakai kpis list --agent-id YOUR_AGENT_ID --json | jq '.[].calculatorParams.formula'
3. If not used in KPIs, remove from SDK code: Don't delete the configs (they may have historical data), but stop sending these fields.
4. Update SDK code to only send KPI-relevant fields:
customData: {
// ✅ Keep: Used in KPI formulas
ItemsProcessed: 10,
SuccessRate: 1.0,
// ❌ Remove: Already tracked by platform
// sessionId: session.id, // Already tracked
// agentId: agentConfig.id, // Already tracked
// timestamp: Date.now(), // Already tracked
}
Before creating a CustomDataConfig, ask: is this field already tracked by the platform, and will it actually be referenced in a KPI formula? If it is already tracked, or no KPI needs it, skip the config.
KPI values are null instead of numbers.
"kpiData": {
"Success Rate": null,
"Items Processed": null
}
1. Check if CustomData is Present
olakai activity get EVENT_ID --json | jq '.customData'
If the field is missing from customData, the KPI can't calculate.
2. Check CustomDataConfig Exists
olakai custom-data list --json | jq '.[].name'
Every field referenced in KPI formulas needs a config.
3. Check Formula Validity
olakai kpis validate --formula "YourVariable" --agent-id YOUR_AGENT_ID
If invalid, you'll see the error.
1. Ensure CustomDataConfig Exists
olakai custom-data create --agent-id YOUR_AGENT_ID --name "YourVariable" --type NUMBER
2. Ensure SDK Sends the Field
customData: {
YourVariable: 42, // Must be present and correct type
}
3. Trigger New Event
Old events won't recalculate. Generate a new event to verify the fix.
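A quick way to generate that fresh event without running the whole agent is a one-off manual event carrying the field. A sketch using the event() call shown earlier; replace YourVariable with whatever your KPI formula references, and olakai is assumed to be an initialized SDK instance:
// One-off test event to confirm the KPI now computes a number.
olakai.event({
  prompt: "KPI smoke test",
  response: "KPI smoke test response",
  customData: {
    YourVariable: 42, // must match the CustomDataConfig name exactly
  },
});
// Then re-check: olakai activity list --agent-id YOUR_AGENT_ID --limit 1 --json | jq '.prompts[0].kpiData'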
A classifier KPI (created from a template like sentiment_scorer or time_saved_estimator) shows null values or does not update after events are sent.
sessionId is not being passed, so there is no chat to decorate.
1. Check the KPI scope
olakai kpis list --agent-id YOUR_AGENT_ID --json | jq '.[] | {name, scope, calculatorId}'
Classifier KPIs should have "scope": "CHAT" and "calculatorId": "classifier".
2. Verify events have a sessionId
olakai activity list --agent-id YOUR_AGENT_ID --limit 5 --json | jq '.prompts[] | {id, sessionId}'
If sessionId is null or different for each event, the platform cannot group them into a conversation for decoration.
3. Wait for chat decoration
Chat decoration (which triggers classifier evaluation) runs after a conversation is considered complete or after a processing delay. If you just sent events, wait a few minutes and check again.
4. Inspect session decoration status
olakai activity sessions --agent-id YOUR_AGENT_ID
This shows a summary of how many sessions (chats) have been decorated vs. still pending. If most sessions show NEW status, the decoration pipeline hasn't processed them yet. If sessions show DECORATION_FAILED, check the error column for details.
# Get full diagnostic details as JSON
olakai activity sessions --agent-id YOUR_AGENT_ID --json
1. Ensure SDK sends a consistent sessionId:
// All turns in the same conversation must share a sessionId
olakai.event({
prompt: userMessage,
response: aiResponse,
sessionId: conversationId, // Same ID for all turns in this conversation
userEmail: user.email,
});
2. Ensure the conversation has enough data:
Classifier KPIs analyze the full conversation context. A single-turn chat may not produce useful results for sentiment analysis. Send at least 2-3 turns before expecting a value.
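For example, a short scripted conversation that reuses one sessionId across turns gives the classifier enough context to evaluate. A sketch; askModel is a hypothetical stand-in for your actual LLM call, and olakai is an initialized SDK instance:
// Same sessionId for every turn so the platform groups them into one chat.
const conversationId = crypto.randomUUID();

const turns = [
  "What were last quarter's sales numbers?",
  "Break that down by region.",
  "Which region grew fastest?",
];

for (const userMessage of turns) {
  const aiResponse = await askModel(userMessage); // hypothetical helper wrapping your LLM call
  olakai.event({
    prompt: userMessage,
    response: aiResponse,
    sessionId: conversationId,
    userEmail: "test@example.com",
  });
}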
3. Check that the template exists:
olakai kpis templates
Verify the template-id you used when creating the KPI is a valid template.
ROI on the dashboard shows a flat value (e.g., $10) for every prompt request, regardless of conversation complexity.
The agent's Time Saved metric slot KPI does not have a CHAT-scope classifier (time_saved_estimator). Without it, the slot falls back to a default time saved estimate instead of per-conversation AI classification, causing the Value Created slot and the ROI composite to show flat values.
1. Check if the classifier KPI exists:
olakai kpis list --agent-id YOUR_AGENT_ID --json | jq '.[] | select(.calculatorId == "classifier") | {name, scope, calculatorId}'
If empty, the classifier KPI is missing.
2. Check if the agent was created via CLI:
Agents created through olakai agents create (CLI/API) do not automatically get the classifier KPI. Only agents created through the dashboard UI auto-provision it.
Add the classifier KPI manually:
olakai kpis create --name "Time Saved" \
--calculator-id classifier --template-id time_saved_estimator \
--scope CHAT --agent-id YOUR_AGENT_ID
Verify it was created:
olakai kpis list --agent-id YOUR_AGENT_ID
After adding, new conversations will get per-conversation time saved estimates from the classifier, producing varied ROI values.
In Assistive IQ / Shadow AI dashboards, every application shows the same dollar value per interaction, regardless of the app (ChatGPT, Claude, Gemini, etc.).
The per-app time saved override (defaultTimeSavedMinutes on LanguageModel) is not configured. All apps fall back to the global default of 30 minutes.
Check whether any per-app overrides (defaultTimeSavedMinutes) are already set in the admin UI under Shadow AI > Manage.
Set per-app time saved values there for each application that needs one.
Per-app overrides always take precedence over the global default. Different apps may warrant different default time saved values (e.g., ChatGPT for quick questions: 10 min, Claude for deep analysis: 45 min).
Error: Authentication required
Error: Token expired
Error: Unauthorized
1. Re-authenticate
olakai logout
olakai login
2. Verify Authentication
olakai whoami
3. Check Credentials File
ls -la ~/.config/olakai/
cat ~/.config/olakai/credentials.json
Should contain accessToken and refreshToken.
4. Check Environment
# Default is production
olakai whoami
# For staging
OLAKAI_ENV=staging olakai whoami
Events appear in general activity but not under your agent.
The app field in events doesn't match the agent name.
# Check agent name
olakai agents list --json | jq '.[] | {id, name}'
# Check event app field
olakai activity get EVENT_ID --json | jq '.app'
Option A: Match App Name to Agent
In your SDK code, ensure the app name matches:
olakai.event({
prompt,
response,
app: "Your Agent Name", // Must match agent name exactly
});
Or with wrapped client:
const openai = olakai.wrap(new OpenAI({ apiKey }), {
provider: "openai",
defaultContext: {
app: "Your Agent Name",
},
});
Option B: Update Agent Name
olakai agents update AGENT_ID --name "App Name From Events"
LLM calls take much longer when monitoring is enabled.
Check if it's the SDK or the LLM:
const startSDK = Date.now();
const response = await openai.chat.completions.create(...);
console.log(`Total time: ${Date.now() - startSDK}ms`);
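To actually separate SDK overhead from model latency, time the same request through an unwrapped and a wrapped client; with fire-and-forget reporting the two numbers should be close. A sketch that assumes olakai and messages are in scope and follows the wrap() usage shown elsewhere in this guide:
import OpenAI from "openai";

const raw = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const wrapped = olakai.wrap(new OpenAI({ apiKey: process.env.OPENAI_API_KEY }), { provider: "openai" });

const t1 = Date.now();
await raw.chat.completions.create({ model: "gpt-4o", messages });
console.log(`unwrapped: ${Date.now() - t1}ms`); // pure model latency

const t2 = Date.now();
await wrapped.chat.completions.create({ model: "gpt-4o", messages });
console.log(`wrapped:   ${Date.now() - t2}ms`); // model latency plus any SDK overhead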
1. SDK Uses Fire-and-Forget
The SDK should NOT block your application. If it is:
// Ensure you're not awaiting event()
olakai.event(params); // No await - fire and forget
2. Check Network Latency
time curl -I https://app.olakai.ai/api/health
If this exceeds ~500ms, network latency to app.olakai.ai is contributing to the delay rather than the SDK itself.
3. Disable Debug Mode in Production
const olakai = new OlakaiSDK({
apiKey: process.env.OLAKAI_API_KEY!,
debug: false, // Disable in production
});
Each LLM call creates multiple events.
Usually double-initialization or multiple wrapping.
Search your code for:
- olakai.init() calls
- olakai.wrap() calls
- instrument_openai() calls
Use Singleton Pattern
// lib/olakai.ts
let instance: OlakaiSDK | null = null;
export async function getOlakai(): Promise<OlakaiSDK> {
if (!instance) {
instance = new OlakaiSDK({ apiKey: process.env.OLAKAI_API_KEY! });
await instance.init();
}
return instance;
}
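Call sites then share the single instance (usage sketch; prompt and response stand in for your real values):
import { getOlakai } from "./lib/olakai";

const olakai = await getOlakai(); // always returns the same initialized instance
olakai.event({ prompt, response });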
# Authentication
olakai whoami # Check current user
olakai login # Re-authenticate
# Events/Activity
olakai activity list --limit N # Recent events
olakai activity list --agent-id ID # Events for specific agent
olakai activity get EVENT_ID --json # Full event details
# Agents
olakai agents list # All agents
olakai agents list --json | jq '.[].name' # Just names
# KPIs
olakai kpis list --agent-id ID # KPIs for agent
olakai kpis list --agent-id ID --json | jq '.[] | {name, calculatorParams}'
olakai kpis validate --formula "X" # Test formula
# Custom Data
olakai custom-data list # All configs
olakai custom-data list --json | jq '.[].name' # Just names
# Quick Health Check
olakai whoami && olakai activity list --limit 1 && echo "OK"
Events not appearing?
├── Check API key is set → export OLAKAI_API_KEY=sk_...
├── Check SDK initialized → await olakai.init()
├── Enable debug mode → debug: true
└── Test direct API → curl POST /api/monitoring/prompt
KPIs showing strings?
├── Check formula storage → olakai kpis list --json
├── Formula is raw string → olakai kpis update ID --formula "X"
└── Missing CustomDataConfig → olakai custom-data create --agent-id ID
KPIs showing null?
├── Check customData sent → olakai activity get ID --json
├── Field missing → Add to customData in SDK
├── CustomDataConfig missing → olakai custom-data create --agent-id ID
└── Type mismatch → NUMBER needs number, STRING needs string
customData field not usable in KPIs?
├── Check CustomDataConfig exists → olakai custom-data list --agent-id ID
├── Config missing → olakai custom-data create --agent-id ID --name "Field" --type NUMBER
└── Case mismatch → Ensure exact case match between SDK and config
Events not under agent?
├── Check app name matches → Compare event.app to agent.name
└── Mismatch → Update agent name or SDK app field
KPIs not appearing on new agent?
├── KPIs are agent-specific, NOT shared across agents
├── Check KPIs exist for THIS agent → olakai kpis list --agent-id THIS_AGENT_ID
├── If empty → Create KPIs for this agent (can't reuse from other agents)
└── CustomDataConfigs ARE shared → no need to recreate those
Classifier KPI showing null or not updating?
├── Check KPI scope is CHAT → olakai kpis list --agent-id ID --json
├── Check events have sessionId → olakai activity list --agent-id ID --json
├── sessionId missing → Add sessionId to SDK event calls
├── Too few turns → Send 2-3+ conversation turns before expecting a value
└── Just sent events → Wait for chat decoration (runs after delay/completion)
ROI shows same $ value for every prompt?
├── Check Time Saved slot KPI has classifier → olakai kpis list --agent-id ID --json
├── No classifier KPI → olakai kpis create --calculator-id classifier --template-id time_saved_estimator --scope CHAT --agent-id ID
├── Agent created via CLI → CLI may not auto-provision classifier for Time Saved slot, add manually
└── Classifier exists but still flat → Check sessionId grouping, wait for chat decoration
Shadow AI ROI shows same value for all apps?
├── Per-app override not set → Check LanguageModel.defaultTimeSavedMinutes in admin UI
├── All apps using global default (30 min) → Set per-app overrides in Shadow AI > Manage
└── Per-app override always wins over AA estimate
Only fields registered as CustomDataConfigs become available in KPI formulas:
SDK customData → CustomDataConfig → Context Variable → KPI Formula → kpiData
↓ ↓ ↓ ↓ ↓
Any JSON Schema definition Available var Expression Computed value
(REQUIRED)
Common pitfall: Sending extra fields in customData without CustomDataConfigs - they're stored but unusable for KPIs.