Self-improving AI system for distressed property lead generation. Monitors performance, spawns specialized skills to fix bottlenecks, runs A/B tests, and continuously optimizes lead conversion. Use when building or optimizing lead generation workflows, analyzing pipeline metrics, or creating automated lead intelligence systems.
This skill creates and manages a self-improving AI system for distressed property lead generation. It is not just a tool: it is a system with a built-in drive to get better.
Core Capability: Monitor every lead through the pipeline, identify bottlenecks, spawn specialized skills to fix problems, A/B test improvements, and keep what works.
Trigger this skill when the user is building or optimizing lead generation workflows, analyzing pipeline metrics, or creating automated lead intelligence systems.
The Improvement Loop:
1. MONITOR → Track every lead through pipeline
2. ANALYZE → Identify bottlenecks automatically
3. SPAWN → Create specialized skill to fix problem
4. TEST → Run A/B test with 50% of leads
5. DECIDE → Keep if it works, kill if it doesn't
6. REPEAT → Forever
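The loop above can be sketched in TypeScript. This is a minimal sketch: all helper names (`monitor`, `spawnSkillFor`, `runExperiment`) are illustrative assumptions, not an existing API.

```typescript
type Decision = 'KEEP' | 'KILL' | 'CONTINUE_TESTING';

interface PipelineSnapshot {
  // Conversion rate per pipeline stage, in the range 0..1
  stageConversion: Record<string, number>;
}

// ANALYZE: treat the stage with the lowest conversion rate as the bottleneck
function findBottleneck(snapshot: PipelineSnapshot): string {
  return Object.entries(snapshot.stageConversion)
    .sort(([, a], [, b]) => a - b)[0][0];
}

// One pass of the cycle. The MONITOR / SPAWN / TEST+DECIDE steps are
// injected as functions so the loop itself stays testable.
async function improvementCycle(
  monitor: () => Promise<PipelineSnapshot>,
  spawnSkillFor: (stage: string) => Promise<string>,
  runExperiment: (skillId: string) => Promise<Decision>,
): Promise<Decision> {
  const snapshot = await monitor();                 // 1. MONITOR
  const bottleneck = findBottleneck(snapshot);      // 2. ANALYZE
  const skillId = await spawnSkillFor(bottleneck);  // 3. SPAWN
  return runExperiment(skillId);                    // 4-5. TEST + DECIDE
}
```

REPEAT is just calling `improvementCycle` on a schedule; the caller decides cadence.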
If building from scratch:
# Check for existing lead tracking infrastructure
view /mnt/user-data/uploads/supabase/migrations
view /mnt/user-data/uploads/app/api
# Look for existing lead forms, CRM integrations, or tracking systems
view /mnt/user-data/uploads/components
Extract relevant details from what you find: lead sources, pipeline stages, and any tracking already in place.
If improving an existing system: ask the user which metrics they currently track and which pipeline stage feels slowest before changing anything.
Database Schema Required:
Create tables to track:
leads - Every incoming lead with source attribution
lead_events - Every action on a lead (viewed, contacted, qualified, etc.)
lead_experiments - A/B tests running on subsets of leads
lead_metrics - Aggregated performance by source, agent, time period
lead_skills - Spawned skills and their performance impact
Key Metrics to Monitor:
Read references/monitoring-framework.md for complete schema and tracking implementation.
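As a rough orientation before reading that reference, the core table rows might look like these TypeScript types. Field names here are assumptions for illustration, not the actual schema.

```typescript
// Illustrative row shapes for the tracking tables; adapt to the real schema.
interface LeadRow {
  id: number;
  source: string;      // attribution, e.g. 'zillow' or 'referral'
  phone: string;
  created_at: string;  // ISO timestamp
}

interface LeadEventRow {
  lead_id: number;
  event: string;       // 'viewed' | 'contacted' | 'qualified' | ...
  stage: string;
  converted: boolean;  // feeds the stage conversion-rate analysis
  created_at: string;
}

interface LeadExperimentRow {
  id: number;
  skill_name: string;  // e.g. 'instant-responder'
  status: 'running' | 'kept' | 'killed';
}
```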
Common Bottleneck Patterns:
Low Response Rate → Spawn "instant-responder" skill
Poor Qualification → Spawn "lead-scorer" skill
Slow Follow-Up → Spawn "follow-up-sequencer" skill
Weak Lead Sources → Spawn "source-optimizer" skill
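The pattern-to-skill mapping above can be encoded directly. The skill names come from the list; the bottleneck keys are assumed identifiers for illustration.

```typescript
type Bottleneck =
  | 'low_response_rate'
  | 'poor_qualification'
  | 'slow_follow_up'
  | 'weak_lead_sources';

// Which skill to spawn for each detected bottleneck pattern
const SKILL_FOR_BOTTLENECK: Record<Bottleneck, string> = {
  low_response_rate: 'instant-responder',
  poor_qualification: 'lead-scorer',
  slow_follow_up: 'follow-up-sequencer',
  weak_lead_sources: 'source-optimizer',
};

function skillToSpawn(bottleneck: Bottleneck): string {
  return SKILL_FOR_BOTTLENECK[bottleneck];
}
```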
Analysis Query Template:
-- Find conversion rates by pipeline stage
SELECT
stage,
COUNT(*) as lead_count,
COUNT(*) FILTER (WHERE converted = true) as converted_count,
(COUNT(*) FILTER (WHERE converted = true)::float / COUNT(*)) * 100 as conversion_rate
FROM lead_events
WHERE created_at > NOW() - INTERVAL '30 days'
GROUP BY stage
ORDER BY stage;
-- Identify slowest stage transitions
SELECT
from_stage,
to_stage,
AVG(time_in_stage) as avg_duration,
PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY time_in_stage) as median_duration
FROM lead_stage_transitions
GROUP BY from_stage, to_stage
ORDER BY avg_duration DESC;
When a bottleneck is identified, create a targeted skill to fix it.
Skill Spawning Protocol:
Define the problem clearly
Design the intervention
Create A/B test plan
Implement tracking
Example Spawned Skill: instant-responder
// Auto-respond to new leads within 60 seconds
export async function handleNewLead(lead: Lead) {
  // Check whether the instant-responder experiment is running
  const experiment = await getActiveExperiment('instant-responder');

  // Control group, or no active experiment: existing process (manual agent
  // follow-up). lead.id % 2 gives a simple, deterministic 50/50 split.
  if (!experiment || lead.id % 2 === 0) {
    await assignToAgent(lead);
    return;
  }

  // Treatment group: instant AI response
  const personalizedMessage = await generateResponse(lead);
  await sendSMS(lead.phone, personalizedMessage);
  await logEvent(lead.id, 'instant_response_sent', { experiment_id: experiment.id });

  // Still assign to agent for human follow-up
  await assignToAgent(lead);
}
Testing Framework:
Read references/ab-testing-protocol.md for detailed implementation. Core principles: randomize assignment, fix the sample size and duration before launch, and change only one variable per test.
Decision Criteria:
interface ExperimentResults {
control: {
sample_size: number;
conversion_rate: number;
avg_time_to_convert: number;
};
treatment: {
sample_size: number;
conversion_rate: number;
avg_time_to_convert: number;
};
lift: number; // % improvement
p_value: number; // statistical significance
recommendation: 'KEEP' | 'KILL' | 'CONTINUE_TESTING';
}
Keep the skill if: lift is positive and p_value < 0.05.
Kill the skill if: lift is zero or negative at statistical significance.
Continue testing if: the sample is too small or the result is not yet statistically significant.
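One possible way to turn `ExperimentResults` into a recommendation. The thresholds (p < 0.05, 100 leads per arm) are illustrative defaults, and the interface is re-declared so the snippet stands alone.

```typescript
// Re-declared from the decision-criteria interface above, for self-containment
interface ExperimentResults {
  control: { sample_size: number; conversion_rate: number; avg_time_to_convert: number };
  treatment: { sample_size: number; conversion_rate: number; avg_time_to_convert: number };
  lift: number;     // % improvement
  p_value: number;  // statistical significance
  recommendation: 'KEEP' | 'KILL' | 'CONTINUE_TESTING';
}

function decide(
  r: Omit<ExperimentResults, 'recommendation'>,
): ExperimentResults['recommendation'] {
  const MIN_SAMPLE = 100; // illustrative minimum per arm

  // Not enough data in either arm: keep collecting
  if (r.control.sample_size < MIN_SAMPLE || r.treatment.sample_size < MIN_SAMPLE) {
    return 'CONTINUE_TESTING';
  }
  // Result not statistically significant yet
  if (r.p_value >= 0.05) return 'CONTINUE_TESTING';
  // Significant result: keep on positive lift, kill otherwise
  return r.lift > 0 ? 'KEEP' : 'KILL';
}
```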
The Continuous Improvement Cycle:
Priority Ranking System:
Always optimize in this order:
If no lead system exists: build the tracking infrastructure first. Don't optimize what you haven't measured.
If conversion rates vary wildly week-to-week: lengthen test duration and grow sample sizes before trusting any result.
Prioritize by potential impact:
Only test ONE skill at a time. Running multiple experiments simultaneously creates confounding variables.
Example: Instant-responder works great for Zillow leads but hurts conversion on referral leads.
Solution: Segment-specific deployment
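Segment-specific deployment might look like this sketch: enable the skill only for lead sources where its experiment showed a significant positive lift. The helper shapes and source names are hypothetical.

```typescript
// Per-source experiment outcome (illustrative shape)
interface SegmentResult {
  source: string;
  lift: number;     // % improvement for this segment
  p_value: number;
}

// Keep the skill only where it demonstrably helps
function enabledSources(results: SegmentResult[]): Set<string> {
  return new Set(
    results
      .filter((r) => r.lift > 0 && r.p_value < 0.05)
      .map((r) => r.source),
  );
}

function shouldUseInstantResponder(leadSource: string, enabled: Set<string>): boolean {
  return enabled.has(leadSource);
}
```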
references/monitoring-framework.md — Complete database schema for tracking (leads, lead_events, lead_experiments, lead_metrics, lead_skills).
Read this when: Setting up performance monitoring or designing tracking infrastructure.
references/ab-testing-protocol.md — Detailed guide to running statistically valid A/B tests.
Read this when: Launching a new experiment or analyzing test results.
Library of pre-built skills for common bottlenecks:
Read this when: You've identified a bottleneck and need a proven solution pattern.
User: "Only 40% of our leads are getting contacted within 24 hours. How do we fix this?"
Response:
[Analyzes current lead routing and agent workflow]
The bottleneck is agent response time. Let me spawn an instant-responder skill.
**The Skill:**
- Triggers within 60 seconds of lead capture
- Uses LLM to generate personalized SMS based on property address and lead source
- Still assigns to agent, but lead gets immediate acknowledgment
**A/B Test Design:**
- Control: existing process (agent contacts within 24h)
- Treatment: instant AI response + agent follow-up
- Metric: % of leads that respond to initial contact
- Sample size: 200 leads (100 per group)
- Duration: 2 weeks
**Hypothesis:** Instant response increases engagement rate from 40% to 65%+
[Implements skill, sets up tracking, launches experiment]
I'll check results in 2 weeks and decide: keep, kill, or continue testing.
User: "We're getting tons of leads but only 10% are qualified. We're wasting time on tire-kickers."
Response:
[Analyzes lead sources and qualification criteria]
The bottleneck is lead scoring. Most leads aren't pre-qualified before agent contact.
**Spawning lead-scorer skill:**
Analyzes each incoming lead for:
- Property characteristics (price, condition, equity position)
- Lead behavior (time on site, pages viewed, form urgency signals)
- Demographics (location, device type, referral source)
Assigns A/B/C/D score:
- A leads: high equity, motivated seller signals → immediate agent contact
- B leads: potential, needs nurturing → follow-up sequence
- C leads: low quality, may convert later → drip campaign
- D leads: spam or unqualified → archive
**A/B Test Design:**
- Control: all leads routed to agents equally
- Treatment: leads prioritized by score (A/B get fast response, C/D get automated nurture)
- Metric: % of agent time spent on qualified leads that convert
- Duration: 3 weeks
[Implements lead scoring model, sets up routing rules, launches test]
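The A/B/C/D scoring and routing described in this example could be sketched as follows. The thresholds are placeholders, not a validated model; a real lead-scorer would weigh the property, behavior, and demographic signals listed above.

```typescript
// Simplified signals drawn from the categories above (illustrative fields)
interface LeadSignals {
  equityPct: number;    // estimated equity position, 0..100
  motivated: boolean;   // motivated-seller signals in the form
  pagesViewed: number;  // behavior signal
  isSpam: boolean;
}

type Grade = 'A' | 'B' | 'C' | 'D';

function gradeLead(s: LeadSignals): Grade {
  if (s.isSpam) return 'D';                          // spam or unqualified
  if (s.equityPct >= 40 && s.motivated) return 'A';  // high equity + motivated
  if (s.equityPct >= 20 || s.pagesViewed >= 5) return 'B'; // potential, nurture
  return 'C';                                        // low quality, may convert later
}

// Routing rules: A/B -> immediate agent contact, C -> drip campaign, D -> archive
function routeLead(grade: Grade): 'agent' | 'nurture' | 'archive' {
  if (grade === 'D') return 'archive';
  return grade === 'C' ? 'nurture' : 'agent';
}
```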
Use this skill when: building or optimizing lead generation workflows, analyzing pipeline metrics, or creating automated lead intelligence systems.
Don't use this skill for: one-off lead lookups or tasks unrelated to the lead pipeline.
Built for Hodges & Fooshee Realty by AICA 🔥