Comprehensive toolset for interacting with LeadGenius Pro APIs. Use for managing B2B leads, clients, companies, enrichment settings, AI-driven lead processing (enrichment, copyright, SDR AI), search history, webhooks, territory analysis, email services, and integrations. Supports Cognito JWT (cookies + Bearer) and API key authentication with multi-tenant data isolation.
This skill provides a comprehensive interface for interacting with the LeadGenius Pro API v1.1.
Base URL: `https://last.leadgenius.app/api`

Full Reference: See `references/api_reference.md`
LeadGenius supports three authentication methods, tried in order by getAuthContext:
| Priority | Method | How It Works | Use Case |
|---|---|---|---|
| 1 | Cognito Cookies | Automatic via browser session (next/headers cookies) | Web app (frontend) |
| 2 | Bearer JWT ✨ | `Authorization: Bearer <accessToken>` header; JWT `sub` claim extracted as owner | External scripts, agents, CLI tools |
| 3 | API Key | `x-api-key: <key>` + `x-user-id: <sub>` headers | Bulk data extraction, service-to-service |
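For the two non-cookie methods, the request headers can be built as follows (a small sketch; the token and key values are placeholders, and the helper names are illustrative):

```python
def bearer_headers(access_token: str) -> dict:
    """Headers for Bearer JWT auth (priority 2)."""
    return {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    }

def api_key_headers(api_key: str, user_sub: str) -> dict:
    """Headers for API key auth (priority 3); x-user-id carries the Cognito sub."""
    return {
        "x-api-key": api_key,
        "x-user-id": user_sub,
        "Content-Type": "application/json",
    }
```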
# Option 1: Use the test/auth script
python3 scripts/auth.py --email [email protected]
# Option 2: Direct API call
curl -X POST https://last.leadgenius.app/api/auth \
-H "Content-Type: application/json" \
-d '{"username": "[email protected]", "password": "YourPassword"}'
Response:
{
"success": true,
"tokens": {
"accessToken": "eyJra...",
"idToken": "eyJra...",
"refreshToken": "eyJj...",
"expiresIn": 3600
}
}
Use tokens.accessToken as the Bearer token for all subsequent API calls. Tokens expire after 1 hour.
When your access token expires, use the refresh token to get a new one without re-authenticating:
curl -X POST https://last.leadgenius.app/api/auth/refresh \
-H "Content-Type: application/json" \
-d '{"refreshToken": "<your-refresh-token>"}'
Response:
{
"success": true,
"tokens": {
"accessToken": "eyJra...",
"idToken": "eyJra...",
"refreshToken": "eyJj...",
"expiresIn": 3600
}
}
Python Example:
import json
import os
import requests

def refresh_token(refresh_token):
    """Exchange a refresh token for a new set of tokens."""
    response = requests.post(
        "https://last.leadgenius.app/api/auth/refresh",
        json={"refreshToken": refresh_token},
    )
    response.raise_for_status()
    return response.json()

def get_valid_token(auth_file="~/.leadgenius_auth.json"):
    """Return a working access token, refreshing it if a probe call returns 401.

    A test request is used as a simple expiry check; in production, decode the
    JWT and check the exp claim instead of spending an API call.
    """
    auth_file = os.path.expanduser(auth_file)
    with open(auth_file) as f:
        auth = json.load(f)
    response = requests.get(
        "https://last.leadgenius.app/api/clients",
        headers={"Authorization": f"Bearer {auth['token']}"},
    )
    if response.status_code == 401:
        # Token expired -- refresh it and persist the new access token
        new_tokens = refresh_token(auth["refresh_token"])
        auth["token"] = new_tokens["tokens"]["accessToken"]
        with open(auth_file, "w") as f:
            json.dump(auth, f)
    return auth["token"]
# All API calls use the accessToken in the Authorization header
curl -H "Authorization: Bearer <accessToken>" \
-H "Content-Type: application/json" \
https://last.leadgenius.app/api/clients
The getAuthContext middleware (src/utils/apiAuthHelper.ts) decodes the JWT, extracts the sub claim as the owner, and resolves the company_id for multi-tenant isolation — all automatically.
Tokens are saved to ~/.leadgenius_auth.json by the auth scripts:
{
"token": "<accessToken>",
"id_token": "<idToken>",
"refresh_token": "<refreshToken>",
"email": "[email protected]",
"user_id": "<uuid-user-id>",
"base_url": "https://last.leadgenius.app"
}
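A small helper for reusing that saved file from your own scripts (the function names are illustrative, not part of the shipped tooling):

```python
import json
import os

def load_auth(path: str = "~/.leadgenius_auth.json") -> dict:
    """Read the auth file written by the auth scripts."""
    with open(os.path.expanduser(path)) as f:
        return json.load(f)

def headers_from_auth(auth: dict) -> dict:
    """Bearer headers built from the saved accessToken."""
    return {
        "Authorization": f"Bearer {auth['token']}",
        "Content-Type": "application/json",
    }
```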
The following operations have been validated end-to-end with the test suite.
All client operations are scoped by company_id (resolved from JWT).
| Operation | Method | Endpoint | Status |
|---|---|---|---|
| List all clients | GET | /api/clients | ✅ Tested |
| Get single client | GET | /api/clients?clientId=<id> | ✅ Tested |
| Create client | POST | /api/clients | ✅ Tested (201) |
| Update client | PUT | /api/clients | ✅ Tested |
| Delete client | DELETE | /api/clients?id=<id> | ✅ Tested |
| Purge client + leads | DELETE | /api/clients?id=<id>&purge=true | ⚠️ See warning below |
{
"clientName": "Acme Corp",
"companyURL": "https://acme.com",
"description": "Enterprise client for B2B leads"
}
Response (201):
{
"success": true,
"client": {
"id": "edd5c738-...",
"client_id": "acme-corp",
"clientName": "Acme Corp",
"companyURL": "https://acme.com",
"description": "Enterprise client for B2B leads",
"owner": "4428a4f8-...",
"company_id": "company-177..."
}
}
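When scripting client creation, capture the slug from the response rather than the UUID — a minimal helper (name is illustrative) over the response shape shown above:

```python
def extract_slug(create_response: dict) -> str:
    """Return the slug (client_id) from a POST /api/clients response.

    This is the identifier to use for ALL lead operations -- never the
    DynamoDB UUID in the `id` field.
    """
    return create_response["client"]["client_id"]
```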
⚠️ CRITICAL — Slug vs UUID ("Invisible Leads" Bug). The client record has TWO identifiers:

- `id` — The internal DynamoDB UUID (e.g. `edd5c738-...`). Used only for update/delete of the client record itself.
- `client_id` — The human-readable slug (e.g. `acme-corp`, `historic-leads`). Used for all lead operations.

ALWAYS use the slug (`client_id`) when creating or querying leads. The UI queries leads by slug. If you mistakenly use the UUID `id` as the lead's `client_id`, the leads will exist in the database but will be invisible in the UI.

Verification: `GET /api/leads?client_id=<slug>&limit=1` — if it returns leads, the UI will show them too.
{
"id": "<dynamodb-id>",
"clientName": "Updated Name",
"description": "Updated description"
}
⚠️ Purge Timeout Warning: The `purge=true` flag on client deletion will time out if the client has more than ~1,000 leads. For large datasets, delete leads first using a concurrent batch deletion script (batches of 50 IDs), then delete the client record.
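A sketch of that pre-delete pass, assuming the batch DELETE body documented in the leads section (sequential here for clarity; a thread pool can run batches concurrently):

```python
def chunked(ids: list, size: int = 50) -> list:
    """Split lead IDs into batches of `size` for DELETE /api/leads."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]

def delete_all_leads(headers: dict, lead_ids: list,
                     base_url: str = "https://last.leadgenius.app") -> int:
    """Delete leads in batches of 50, then the caller can delete the client."""
    import requests  # assumed available, as in the other examples
    deleted = 0
    for batch in chunked(lead_ids):
        resp = requests.delete(f"{base_url}/api/leads",
                               json={"ids": batch}, headers=headers)
        resp.raise_for_status()
        deleted += len(batch)
    return deleted
```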
Leads are stored as EnrichLeads in DynamoDB and are scoped by client_id and company_id.
| Operation | Method | Endpoint | Status |
|---|---|---|---|
| List leads | GET | /api/leads?client_id=<slug>&limit=100 | ✅ Tested |
| Create single lead | POST | /api/leads | ✅ Tested (201) |
| Create batch leads | POST | /api/leads (with leads array) | ⚠️ See warning below |
| Update single lead | PUT | /api/leads | ✅ Tested |
| Batch update leads | PUT | /api/leads (with leads array) | ✅ Tested |
| Delete single lead | DELETE | /api/leads?id=<id> | ✅ Tested |
| Batch delete leads | DELETE | /api/leads (with ids array body) | ✅ Tested |
🚨 CRITICAL — Batch POST May Not Persist: The batch endpoint (`POST /api/leads` with `{"leads": [...]}`) returns `201 Created` and reports a correct `created` count, but leads may not be saved to the database. This was discovered during production HubSpot imports (Feb 2026). For reliable imports, always POST leads individually (single object payload). Single-lead POST is confirmed 100% reliable. See HUBSPOT_TO_LEADGENIUS.md Bug #2 for details.

💡 Batch Size Recommendation: If using batch POST, use a batch size of 50 leads per request. For maximum reliability, prefer single-lead POST with a 150ms delay between requests (~400 leads/min).
{
"client_id": "acme-corp",
"firstName": "John",
"lastName": "Smith",
"email": "[email protected]",
"companyName": "Acme Corp",
"companyDomain": "acme.com",
"title": "VP Engineering",
"linkedinUrl": "https://linkedin.com/in/johnsmith"
}
⚠️ The `client_id` here is the slug (e.g. `acme-corp`), NOT the DynamoDB UUID. See the Slug vs UUID warning above.
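The reliable single-lead import loop can be sketched as follows (helper names are illustrative; the empty-field filter reflects the production finding that empty strings can cause 500s):

```python
import time

def clean_lead(lead: dict) -> dict:
    """Drop empty fields -- empty strings have caused 500s in production imports."""
    return {k: v for k, v in lead.items() if v not in ("", None)}

def post_leads_one_by_one(headers: dict, leads: list,
                          base_url: str = "https://last.leadgenius.app") -> int:
    """POST each lead individually (the reliable path) at ~400 leads/min."""
    import requests  # assumed available
    created = 0
    for lead in leads:
        resp = requests.post(f"{base_url}/api/leads",
                             json=clean_lead(lead), headers=headers)
        if resp.status_code == 201:
            created += 1
        time.sleep(0.15)  # 150ms delay keeps well under rate limits
    return created
```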
{
"leads": [
{
"client_id": "acme-corp",
"firstName": "Jane",
"lastName": "Doe",
"email": "[email protected]",
"companyName": "BatchCorp",
"title": "CTO"
},
{
"client_id": "acme-corp",
"firstName": "Bob",
"lastName": "Wilson",
"email": "[email protected]",
"companyName": "BatchCorp",
"title": "VP Sales"
}
]
}
Batch Response (201):
{
"success": true,
"created": 2,
"skipped": []
}
{
"id": "<lead-dynamodb-id>",
"title": "Senior VP Engineering",
"notes": "Updated via API"
}
{
"leads": [
{ "id": "<id-1>", "notes": "Updated lead 1" },
{ "id": "<id-2>", "notes": "Updated lead 2" }
]
}
{
"ids": ["<id-1>", "<id-2>"]
}
Query parameters:

- `client_id` (required) — The slug from the Client record (not the DynamoDB `id`)
- `limit` — 1 to 1000 (default: 100)
- `nextToken` — Pagination token

Structured AI fields (`aiLeadScore`, `aiQualification`, etc.) are persisted in the backend but may not be visible in the standard LeadGenius UI table view. To guarantee visibility and searchability, aggregate all analytical data into the `notes` field using Markdown formatting:
# Recommended: Aggregate AI fields into Notes for visibility
lead["notes"] = f"""
## 🎯 AI SCORE: {row['aiLeadScore']} ({row['leadScore']}/100)
### 🧐 JUSTIFICATION
{row['justification']}
### 💡 STRATEGIC RECOMMENDATIONS
{row['recommendations']}
### 📝 SDR SYNTHESIS
{row['sdr_synthesis']}
""".strip()
This ensures all critical data is immediately visible in the lead detail view and searchable across the UI.
High-volume extraction bypasses standard auth for raw GSI performance.
| Operation | Endpoint | Auth |
|---|---|---|
| Bulk EnrichLeads | GET /api/enrich-leads/list | API Key |
| Bulk SourceLeads | GET /api/source-leads/list | API Key |
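Both bulk endpoints paginate via `nextToken`. A minimal extraction loop might look like this (the URL builder uses the documented query parameters; the `leads`/`nextToken` response field names are assumptions):

```python
from urllib.parse import urlencode

def bulk_list_url(company_id: str, client_id: str = None, limit: int = 1000,
                  next_token: str = None,
                  base_url: str = "https://last.leadgenius.app") -> str:
    """Build a GET /api/enrich-leads/list URL from the documented parameters."""
    params = {"companyId": company_id, "limit": limit}
    if client_id:
        params["clientId"] = client_id
    if next_token:
        params["nextToken"] = next_token
    return f"{base_url}/api/enrich-leads/list?{urlencode(params)}"

def fetch_all_enrich_leads(api_key: str, user_sub: str, company_id: str) -> list:
    """Page through the bulk endpoint until nextToken is exhausted."""
    import requests  # assumed available
    headers = {"x-api-key": api_key, "x-user-id": user_sub}
    leads, token = [], None
    while True:
        resp = requests.get(bulk_list_url(company_id, next_token=token),
                            headers=headers)
        resp.raise_for_status()
        data = resp.json()
        leads.extend(data.get("leads", []))  # response field name assumed
        token = data.get("nextToken")
        if not token:
            return leads
```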
Query parameters:

- `companyId` (required) — Multi-tenant isolation
- `clientId` (optional) — Filter by client. Use `ALL`/`DEFAULT` or omit for all
- `limit` (optional) — Default 1000, max 5000
- `nextToken` (optional) — Pagination token
- `fields` (optional) — Comma-separated field projection (reduces payload)

| Operation | Endpoint | Auth |
|---|---|---|
| Get Company | GET /api/company | JWT |
| Create Company | POST /api/company | JWT |
| Update Company | PUT /api/company | JWT (Owner/Admin) |
Track and manage lead search operations (e.g., Apify scraping runs).
| Operation | Endpoint | Auth |
|---|---|---|
| Create Search History | POST /api/search-history | JWT / API Key |
| List Search History | GET /api/search-history | JWT |
| Update Search History | PUT /api/search-history | JWT |
Query parameters: `client_id`, `status` (`initiated`/`running`/`completed`/`failed`), `icpId`, `category`, `limit`, `cursor`

Example payload:

{
"searchName": "Tech Companies in SF",
"searchDescription": "Looking for SaaS companies",
"searchCriteria": { "industry": "Technology", "location": "San Francisco" },
"searchFilters": { "companySize": "50-200" },
"client_id": "client-123",
"clientName": "Acme Corp",
"icpId": "icp-456",
"icpName": "Enterprise SaaS",
"apifyActorId": "actor-789",
"category": "prospecting",
"tags": ["enterprise", "saas"]
}
Create and manage inbound webhooks for third-party integrations.
| Operation | Endpoint | Auth |
|---|---|---|
| List Webhooks | GET /api/webhook-workbench | JWT / API Key |
| Create Webhook | POST /api/webhook-workbench | JWT / API Key |
Supported Platforms: heyreach, woodpecker, lemlist, generic
{
"name": "My Webhook",
"platform": "heyreach",
"description": "Webhook for lead capture",
"platformConfig": "{\"field_mapping\": {}}",
"client_id": "client-123"
}
Returns: webhookUrl with embedded secret key for the external platform to call.
Aggregated company-level analytics for territory planning.
| Operation | Endpoint | Auth |
|---|---|---|
| List Companies | GET /api/territory-workbench/companies | JWT |
| Create/Update Company | POST /api/territory-workbench/companies | JWT |
Query parameters: `client_id` (required), `sortBy`, `sortDirection`, `industry`, `minLeads`, `maxLeads`, `search`, `startDate`, `endDate`

Settings drive all Lead Processing endpoints. Configure settings first, then trigger processing.
All settings endpoints require JWT auth and are scoped by company_id.
| Operation | Endpoint |
|---|---|
| Get | GET /api/settings/url |
| Create | POST /api/settings/url |
| Update | PUT /api/settings/url |
Available URL/key pairs: companyUrl, emailFinder, enrichment1 through enrichment10 (each with _Apikey).
| Operation | Endpoint |
|---|---|
| Get | GET /api/settings/agent |
| Create | POST /api/settings/agent |
| Update | PUT /api/settings/agent |
Available fields: projectId, enrichment1AgentId through enrichment10AgentId.
| Operation | Endpoint |
|---|---|
| Get | GET /api/settings/sdr-ai |
| Create | POST /api/settings/sdr-ai |
| Update | PUT /api/settings/sdr-ai |
Available fields: projectId, plus <fieldName>AgentId for all SDR fields.
Settings-driven execution routes that trigger enrichment, copyright, and SDR AI processing on individual leads.
| Processing Type | Endpoint | Settings Source |
|---|---|---|
| Enrichment | POST /api/leads/process/enrich | URL Settings |
| Copyright AI | POST /api/leads/process/copyright | Agent Settings |
| SDR AI | POST /api/leads/process/sdr | SDR AI Settings |
{
"leadId": "enrich-lead-id",
"services": ["companyUrl", "enrichment1", "enrichment3"],
"overwrite": false
}
{
"leadId": "enrich-lead-id",
"processes": [1, 3, 5],
"overwrite": false
}
{
"leadId": "enrich-lead-id",
"fields": ["message1", "aiLeadScore", "aiQualification"],
"overwrite": false
}
{
"success": true,
"runIds": ["run-abc-123", "run-def-456"],
"batchTag": "enrich-process-1707654321-x8k2m1",
"triggered": ["companyUrl", "enrichment1"],
"skipped": ["enrichment3"],
"leadId": "enrich-lead-id"
}
1. Configure settings: `POST /api/settings/url`, `POST /api/settings/agent`, `POST /api/settings/sdr-ai`
2. Trigger processing: `POST /api/leads/process/enrich`, `.../copyright`, `.../sdr`
3. Poll the returned `runIds` → `GET /api/trigger-task-status?runId=...`

| Operation | Endpoint |
|---|---|
| Submit Task | POST /api/trigger |
| Check Status | GET /api/trigger-task-status?runId=... |
| List Recent Runs | GET /api/trigger-recent-runs?limit=20 |
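Each processing call returns `runIds` that can be polled against the status endpoint. A polling sketch (the `status` response field and the `completed`/`failed` terminal values are assumptions based on the search-history status values):

```python
import time

TERMINAL_STATES = {"completed", "failed"}  # assumed terminal status values

def is_terminal(status: str) -> bool:
    """True once a run can no longer change state."""
    return status in TERMINAL_STATES

def wait_for_run(headers: dict, run_id: str, timeout: float = 300,
                 interval: float = 5,
                 base_url: str = "https://last.leadgenius.app") -> str:
    """Poll GET /api/trigger-task-status until the run reaches a terminal state."""
    import requests  # assumed available
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = requests.get(f"{base_url}/api/trigger-task-status",
                            params={"runId": run_id}, headers=headers)
        resp.raise_for_status()
        status = resp.json().get("status")  # response field name assumed
        if is_terminal(status):
            return status
        time.sleep(interval)
    raise TimeoutError(f"run {run_id} still pending after {timeout}s")
```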
| Operation | Endpoint |
|---|---|
| Validate Email | POST /api/email-validate |
| Verify Email (deep) | POST /api/email-verify |
| Operation | Endpoint |
|---|---|
| Start Apify Scrape | POST /api/start-lead-scrape-complete |
| Check Scrape Status | GET /api/lead-generation-status?runId=... |
| Epsimo AI Chat | POST /api/epsimo-chat |
| Unipile Accounts | GET /api/unipile-accounts |
All data is strictly isolated. Three isolation layers are enforced:
1. Owner: `owner` ID (JWT `sub`)
2. Company: `company_id` (resolved from CompanyMember table)
3. Client: `client_id` (the slug, not the UUID)

| Field | Example | Used For |
|---|---|---|
| Client `id` | `edd5c738-a1b2-...` (UUID) | Update/delete the client record itself |
| Client `client_id` | `acme-corp` (slug) | ⭐ All lead operations — creating, listing, and querying leads |
| Lead `id` | `lead-a3f2b1c8-...` (UUID) | Update/delete individual lead records |
| `owner` | `4428a4f8-...` (UUID) | Set automatically from JWT `sub` claim |
🚨 Never confuse Client `id` with Client `client_id`. Using the wrong one when creating leads causes the "Invisible Leads" bug — leads exist in the database but don't appear in the UI. See the Slug vs UUID warning above.
After getAuthContext resolves the owner, all data queries use generateClient<Schema>({ authMode: 'apiKey' }) — meaning the Amplify API key handles DynamoDB access while the JWT provides identity and isolation.
All errors follow a consistent format:
{
"success": false,
"error": "Error message",
"errorType": "authentication_error",
"details": "Additional error details",
"recommendation": "Suggested action"
}
| Code | Meaning | Common Causes | Recommended Action |
|---|---|---|---|
| 200 | Success | Request completed successfully | Continue processing |
| 201 | Created | Resource created successfully | Capture returned ID |
| 400 | Bad Request | Invalid payload, missing required fields | Validate request body |
| 401 | Unauthorized | Invalid/expired token, missing auth | Refresh or re-authenticate |
| 403 | Forbidden | Insufficient permissions, wrong company_id | Check user permissions |
| 404 | Not Found | Resource doesn't exist | Verify ID/slug is correct |
| 409 | Conflict | Duplicate resource, constraint violation | Check for existing records |
| 429 | Too Many Requests | Rate limit exceeded | Implement exponential backoff |
| 500 | Server Error | Backend issue, database timeout | Retry with exponential backoff |
| 503 | Service Unavailable | Temporary downtime | Wait and retry |
- Authentication: `no_valid_credentials`, `token_expired`, `federated_jwt`, `no_valid_tokens`
- Authorization: `insufficient_permissions`, `owner_mismatch`
- Validation: `missing_required_field`, `invalid_format`, `invalid_value`
- Resource: `not_found`, `already_exists`, `conflict`

| Tier | Limit |
|---|---|
| Standard | 100 requests/minute |
| Premium | 1,000 requests/minute |
When you hit rate limits (429 status), implement exponential backoff:
import time
import requests
from requests.exceptions import HTTPError
def make_request_with_retry(url, headers, method="GET", json_data=None, max_retries=5):
"""Make API request with automatic retry on rate limits and server errors."""
for attempt in range(max_retries):
try:
if method == "GET":
response = requests.get(url, headers=headers)
elif method == "POST":
response = requests.post(url, headers=headers, json=json_data)
elif method == "PUT":
response = requests.put(url, headers=headers, json=json_data)
elif method == "DELETE":
response = requests.delete(url, headers=headers, json=json_data)
response.raise_for_status()
return response.json()
except HTTPError as e:
if e.response.status_code == 429:
# Rate limited - exponential backoff
wait_time = min(60 * (2 ** attempt), 300) # Max 5 minutes
print(f"Rate limited. Waiting {wait_time}s (attempt {attempt + 1}/{max_retries})...")
time.sleep(wait_time)
elif e.response.status_code >= 500:
# Server error - retry with backoff
wait_time = min(10 * (2 ** attempt), 60) # Max 1 minute
print(f"Server error. Waiting {wait_time}s (attempt {attempt + 1}/{max_retries})...")
time.sleep(wait_time)
elif e.response.status_code == 401:
# Token expired - try to refresh
print("Token expired. Attempting refresh...")
# Implement token refresh here
raise
else:
# Other errors - don't retry
raise
except Exception as e:
print(f"Unexpected error: {e}")
raise
raise Exception(f"Max retries ({max_retries}) exceeded")
# Usage example
headers = {"Authorization": f"Bearer {access_token}"}
result = make_request_with_retry(
"https://last.leadgenius.app/api/clients",
headers=headers
)
- Standard endpoints: paginate with `limit` + `nextToken`
- Bulk endpoints: `limit` default 1000, max 5000, plus `nextToken`

| Script | Description |
|---|---|
| `scripts/test_api.py` | E2E test suite — tests auth, client CRUD, lead CRUD with cleanup |
| `scripts/lgp.py` | Unified CLI for all common operations |
| `scripts/import_csv.py` | CSV import tool — batch import leads from CSV with rate limiting |
| `scripts/api_call.py` | Low-level utility for custom raw API requests |
| `scripts/auth.py` | Standalone auth utility |
python3 scripts/test_api.py \
--username [email protected] \
--password YourPassword \
--base-url https://last.leadgenius.app
Options:
- `--base-url` — Override base URL (default: `https://last.leadgenius.app`)
- `--skip-cleanup` — Keep test data after run

The test creates a temporary client and leads, exercises all CRUD operations, and cleans up automatically.

Unified CLI (`lgp.py`):

# Auth
python3 scripts/lgp.py auth --email [email protected]
# Leads
python3 scripts/lgp.py leads list
python3 scripts/lgp.py leads find --full-name "Hugo Sanchez"
python3 scripts/lgp.py leads enrich --ids lead_1 lead_2
# Campaigns
python3 scripts/lgp.py campaigns list
python3 scripts/lgp.py campaigns create --name "Q3 Expansion"
# Pipeline analytics
python3 scripts/lgp.py pipeline --start 2026-01-01 --end 2026-02-08
# Maintenance
python3 scripts/lgp.py maintenance bugs list
python3 scripts/lgp.py maintenance bugs report --desc "Enrichment fails on LinkedIn URLs"
python3 scripts/lgp.py maintenance enhancements list
python3 scripts/lgp.py maintenance enhancements request --desc "Add support for Google Maps leads"
# API Key generation
python3 scripts/lgp.py generate-key --name "Production Agent" --desc "Key for main auto-agent"
# Admin (admin only)
python3 scripts/lgp.py admin companies
python3 scripts/lgp.py admin users
# 1. Authenticate and get JWT
python3 scripts/auth.py --email [email protected]
# 2. Run the full E2E test suite
python3 scripts/test_api.py --username [email protected] --password YourPassword
# 3. Or make individual API calls
python3 scripts/api_call.py GET /clients
python3 scripts/api_call.py GET /leads?client_id=acme-corp
python3 scripts/api_call.py POST /leads --data '{"client_id": "acme-corp", "firstName": "Test", "lastName": "User"}'
This is the recommended end-to-end procedure for creating a client workspace and importing leads in a single, robust run. Following these steps prevents data orphaning and ensures full visibility in the UI.
# JWT for administrative tasks (client creation, GraphQL)
python3 scripts/lgp.py auth --email [email protected]
# API Key for high-speed batch imports (generate if needed)
python3 scripts/lgp.py generate-key --name "Import-Key"
Verify ~/.leadgenius_auth.json contains both a token and an API key.
Use the REST API or GraphQL mutation. Capture the returned client_id (slug) from the response — this is what you'll use for all lead operations.
python3 scripts/api_call.py POST /clients \
--data '{"clientName": "Historic Leads", "description": "Legacy imported leads"}'
Use the slug (client_id) from Step 2 in every lead object.
⚠️ Use single-lead POST for reliability. Batch POST (`{"leads": [...]}`) may return 201 but not persist leads. See HUBSPOT_TO_LEADGENIUS.md Bug #2.
# Recommended: Single-lead POST (100% reliable)
for lead in all_leads:
payload = {"client_id": "historic-leads", "firstName": "...", ...}
response = requests.post(f"{BASE_URL}/api/leads", json=payload, headers=headers)
time.sleep(0.15) # ~400 leads/min, safe rate
# Confirm leads are visible via the same slug the UI uses
python3 scripts/api_call.py GET "/leads?client_id=historic-leads&limit=1"
Use the provided CSV import script for automated batch import with best practices:
# Authenticate first
python3 scripts/auth.py --email [email protected]
# Import from CSV (auto-creates client)
python3 scripts/import_csv.py \
--csv leads.csv \
--client-name "My Client" \
--company-url "https://example.com"
# Dry run to test
python3 scripts/import_csv.py \
--csv leads.csv \
--client-name "My Client" \
--dry-run
CSV Format:
firstName,lastName,email,companyName,companyDomain,title,linkedinUrl,notes
John,Doe,[email protected],Acme Corp,acme.com,VP Sales,https://linkedin.com/in/johndoe,Demo lead
The script handles:

- Verifying `~/.leadgenius_auth.json` has both token and API key
- Capturing the slug (`client_id`) from the create-client response
- Ensuring every lead's `client_id` uses the slug, NOT the UUID
- Aggregating analytical data into `notes` for UI visibility
- Verifying `GET /api/leads?client_id=<slug>&limit=1` returns results

For importing contacts from HubSpot CRM into LeadGenius, a comprehensive battle-tested guide is available. This covers all the nuances discovered during production imports.
Full Guide: HUBSPOT_TO_LEADGENIUS.md
| HubSpot Source | LeadGenius Field | Notes |
|---|---|---|
| `contact.properties.firstname` | `firstName` | Direct mapping |
| `contact.properties.lastname` | `lastName` | Fallback to email-derived if empty |
| `contact.properties.email` | `email` | Required — skip contacts without one |
| Associated Company `.name` | `companyName` | Via `&associations=companies`, NOT `contact.company` |
| Associated Company `.domain` | `companyUrl` | Via company associations |
| `contact.properties.jobtitle` | `title` | Omit if empty |
| `contact.properties.phone` / `.mobilephone` | `phoneNumber` | ⚠️ NOT `phone` — wrong name causes 500 |
| `lifecyclestage`, `hs_lead_status`, `industry` | `notes` | Append to notes for UI visibility |
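The mapping above can be condensed into one function (a sketch; the caller still adds the `client_id` slug before POSTing each lead individually):

```python
def map_hubspot_contact(props: dict, company: dict = None) -> dict:
    """Map HubSpot contact properties to a LeadGenius lead payload.

    Encodes the production fixes: phoneNumber (not phone), "-" fallback for a
    missing lastName, empty fields omitted entirely, and company data taken
    from the associated company record, not contact.properties.company.
    """
    email = props.get("email")
    if not email:
        return None  # skip contacts without an email
    lead = {
        "firstName": props.get("firstname"),
        "lastName": props.get("lastname") or "-",  # never a lone "." (500s)
        "email": email,
        "title": props.get("jobtitle"),
        "phoneNumber": props.get("phone") or props.get("mobilephone"),
    }
    if company:
        lead["companyName"] = company.get("name")
        lead["companyUrl"] = company.get("domain")
    return {k: v for k, v in lead.items() if v}  # omit empty fields entirely
```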
- Batch POST: `{"leads": [...]}` returns 201 but does NOT persist. Always POST one lead at a time.
- Phone field: use `phoneNumber`, NOT `phone`. The wrong name causes a 500 Internal Server Error.
- Missing last names: use `"-"` (dash). A single dot (`"."`) causes 500 errors.
- Company data: `contact.properties.company` is almost always empty. Fetch via `&associations=companies` and batch-read company details.
- Client identifier: use the `client_id` (slug), never the `id` (UUID).

| # | Bug | Symptom | Fix |
|---|---|---|---|
| 1 | Invisible Leads | 201 but not in UI | Use client_id slug, not id UUID |
| 2 | Batch POST Non-Persistence | 201 + count but leads gone | POST leads individually |
| 3 | phone Field Name | 500 error | Use phoneNumber |
| 4 | Empty String Fields | 500 error | Omit empty fields entirely |
| 5 | HubSpot Company Field Empty | No companyName imported | Use associations API |
| 6 | Dot-Only LastName | 500 error | Use "-" as fallback |
| 7 | Unreliable Total Count | API returns page size as total | Paginate with lastKey or trust import counter |
- `~/.leadgenius_auth.json` exists with valid token
- `.env` contains `HUBSPOT_ACCESS_TOKEN`
- `POST /api/clients` returned successfully
- Leads use the `client_id` (slug), NOT `id` (UUID)
- Company data fetched via `&associations=companies`, not the `company` field
- Fields mapped to `phoneNumber` (not `phone`), `companyUrl`, `title`
- Empty `""` fields omitted; fallback values for missing firstName/lastName
- Single-lead POST used, not the batch `{"leads": [...]}` endpoint
- See `src/utils/apiAuthHelper.ts` for the 3-layer auth implementation