Construction project lead generation for GTM teams using OpenClaw. Collect project requirements, search for construction projects, filter results by criteria (location, size, budget, timeline), and schedule recurring searches. Use when GTM agents need to: (1) Define construction project requirements, (2) Search for projects using OpenClaw, (3) Filter and rank projects by match score, (4) Set up scheduled project searches for fresh opportunities.
Automate construction project discovery for GTM teams by collecting requirements, running OpenClaw-based project searches, filtering results, and scheduling recurring searches.
For OpenClaw API search: an OpenClaw API key (configured below).
For TDLR web scraping: Python dependencies and a Chromium browser for Playwright:
pip install -r requirements.txt
playwright install chromium
Option 1: Environment variable
export OPENCLAW_API_KEY="your-api-key-here"
Option 2: ClawBox Settings Go to ClawBox Settings → Environment Variables → Add:
OPENCLAW_API_KEY = your-api-key-here
The search script will check for this environment variable and provide a clear error message if it is not configured.
For TDLR scraping (no API key needed): The TDLR scraper connects directly to the public Texas Department of Licensing and Regulation (TDLR) website and requires no authentication.
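The environment-variable check described above can be sketched as a fail-fast helper; a minimal sketch (the function name is illustrative, not the script's actual code):

```python
import os
import sys

def require_api_key() -> str:
    """Return OPENCLAW_API_KEY, exiting with a clear message if it is unset."""
    key = os.getenv("OPENCLAW_API_KEY")
    if not key:
        sys.exit(
            "OPENCLAW_API_KEY is not set. Export it in your shell or add it "
            "under ClawBox Settings -> Environment Variables."
        )
    return key
```

Calling this at the top of the search script surfaces a configuration problem immediately instead of failing mid-search.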
Ask the user for project criteria and save to requirements.md:
Use references/requirements_template.md as a guide. The template includes examples and prompts for each field.
Example: Save requirements
cat > requirements.md << 'EOF'
# Construction Project Requirements
## Target Criteria
### Start Date
After March 2026
### Project Location
Texas (statewide)
### Type of Work
Commercial office buildings, Industrial warehouses, Mixed-use developments
### Square Footage
100,000+ sq ft
### Estimated Cost
$5M - $50M
### Disqualifiers
Residential projects, Government contracts
## Output Preferences
### Top Projects
20
EOF
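Downstream scripts need to read these sections back out of requirements.md. A minimal sketch of such a parser, keyed on the level-3 headings used in the template above (the function name is illustrative, not the skill's actual code):

```python
import re

def parse_requirements(text: str) -> dict:
    """Map each '### Heading' in requirements.md to its body text."""
    sections = {}
    # Split on level-3 headings; captured headings alternate with bodies
    parts = re.split(r"^### (.+)$", text, flags=re.MULTILINE)
    for heading, body in zip(parts[1::2], parts[2::2]):
        sections[heading.strip()] = body.strip()
    return sections

sample = """# Construction Project Requirements
## Target Criteria
### Start Date
After March 2026
### Project Location
Texas (statewide)
"""
print(parse_requirements(sample))
# → {'Start Date': 'After March 2026', 'Project Location': 'Texas (statewide)'}
```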
You have three options for searching projects:
Option A: TDLR Web Scraper (Texas public records, no API required)
Scrape real construction projects from the Texas Department of Licensing and Regulation:
# Basic scraping (first 10 pages)
python scripts/scrape_tdlr_enhanced.py projects.db
# More pages
python scripts/scrape_tdlr_enhanced.py projects.db --max-pages 50
# Filter by city
python scripts/scrape_tdlr_enhanced.py projects.db --city "Austin" --max-pages 20
# Filter by county
python scripts/scrape_tdlr_enhanced.py projects.db --county "Travis"
# Get detailed project info (slower, clicks into each project)
python scripts/scrape_tdlr_enhanced.py projects.db --detailed --max-pages 5
# Custom start date filter
python scripts/scrape_tdlr_enhanced.py projects.db --start-date "2026-03-01"
The scraper paginates through TDLR search results (up to --max-pages) and applies the city, county, start-date, and detailed options shown above.
After scraping, query the database:
# Query with your requirements
python scripts/query_db.py requirements.md projects.db > raw_results.json
# Then filter and rank as usual
python scripts/filter_projects.py raw_results.json requirements.md > qualified_projects.json
This is the recommended approach for Texas construction projects: it provides real, current data from public records.
Option B: OpenClaw API Search (if you have API access)
python scripts/search_projects.py > raw_results.json
The script connects to OpenClaw API using OPENCLAW_API_KEY and searches for construction projects. Returns JSON with project details: name, location, size, cost, timeline, owner, architect, GC, stage.
Current implementation includes placeholder sample data. Replace the search_projects() function with actual OpenClaw API integration; the openclaw_client module and its methods below are illustrative, so adapt them to the real SDK:
import os
import openclaw_client  # hypothetical OpenClaw SDK client

def search_projects():
    client = openclaw_client.Client(api_key=os.getenv("OPENCLAW_API_KEY"))
    results = client.search_construction_projects(
        location="Texas",
        start_date_after="2026-03-01",
        min_square_footage=100000,
        project_types=["commercial", "industrial", "mixed-use"],
    )
    return results.to_dict()
Option C: Local SQLite Database Query (for testing or custom data)
python scripts/query_db.py requirements.md [path/to/projects.db] > raw_results.json
This script queries a local SQLite database with filtering applied directly at the database level. Ideal for testing without API access or for querying your own curated project data.
Create a test database with sample projects:
python scripts/create_test_db.py # Creates projects.db with 10 sample TX projects
The database query script automatically filters by your requirements criteria.
Database schema:
CREATE TABLE projects (
id INTEGER PRIMARY KEY,
project_name TEXT NOT NULL,
location TEXT NOT NULL,
county TEXT,
project_type TEXT NOT NULL,
square_footage INTEGER,
estimated_cost TEXT,
start_date TEXT,
completion_date TEXT,
owner TEXT,
architect TEXT,
general_contractor TEXT,
project_stage TEXT,
description TEXT
)
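To see what database-level filtering looks like against this schema, here is a minimal, self-contained sketch; the SQL and thresholds are illustrative, and query_db.py's actual queries may differ:

```python
import sqlite3

# Build an in-memory DB with the schema above and one sample row
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE projects (
    id INTEGER PRIMARY KEY, project_name TEXT NOT NULL, location TEXT NOT NULL,
    county TEXT, project_type TEXT NOT NULL, square_footage INTEGER,
    estimated_cost TEXT, start_date TEXT, completion_date TEXT,
    owner TEXT, architect TEXT, general_contractor TEXT,
    project_stage TEXT, description TEXT)""")
conn.execute(
    "INSERT INTO projects (project_name, location, project_type, "
    "square_footage, start_date) VALUES (?, ?, ?, ?, ?)",
    ("Austin Tech Campus Phase II", "Austin, TX", "Commercial Office",
     185000, "2026-05-01"),
)

# Filter at the DB level: location, minimum size, and start date.
# ISO dates (YYYY-MM-DD) compare correctly as strings.
rows = conn.execute(
    """SELECT project_name, square_footage FROM projects
       WHERE location LIKE ? AND square_footage >= ? AND start_date >= ?""",
    ("%TX%", 100000, "2026-03-01"),
).fetchall()
print(rows)  # → [('Austin Tech Campus Phase II', 185000)]
```

Pushing the WHERE clause into SQLite keeps raw_results.json small even when the scraper has collected thousands of rows.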
Filter raw results against requirements and score matches:
python scripts/filter_projects.py raw_results.json requirements.md > qualified_projects.json
The filter script parses requirements.md to extract criteria, scores each project against them, and writes the qualified matches to JSON. Example output:
{
"qualified_projects": 4,
"projects": [
{
"project_name": "Austin Tech Campus Phase II",
"location": "Austin, TX",
"project_type": "Commercial Office",
"square_footage": 185000,
"estimated_cost": "$42M",
"start_date": "2026-05-01",
"project_stage": "Bidding",
"match_score": 90,
"match_reasons": [
"Location match: Texas",
"Timeline match",
"Size match: 185,000 sq ft",
"Type match: Commercial Office",
"Budget match: $42M",
"Stage: Bidding",
"GC not assigned (opportunity)"
]
}
]
}
Present the top qualified projects to the user with match scores and reasons.
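A small sketch of summarizing the qualified_projects.json structure shown above for presentation (the top_projects() helper is illustrative, not part of the skill's scripts):

```python
import json

# Sample matching the filter script's output format shown above
data = json.loads("""{
  "qualified_projects": 1,
  "projects": [
    {"project_name": "Austin Tech Campus Phase II",
     "location": "Austin, TX", "match_score": 90,
     "match_reasons": ["Location match: Texas", "GC not assigned (opportunity)"]}
  ]
}""")

def top_projects(data: dict, n: int = 5) -> list:
    """Return the top-n projects, highest match_score first."""
    return sorted(data["projects"], key=lambda p: p["match_score"], reverse=True)[:n]

for p in top_projects(data):
    print(f"{p['match_score']:>3}  {p['project_name']} ({p['location']})")
```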
Set up a cron job to run searches automatically:
For TDLR scraping + filtering:
# Example: Daily scrape new Texas projects and notify on matches
create_cron_job \
name:"Daily TDLR Construction Scraper" \
schedule:"0 9 * * *" \
prompt:"Run python scripts/scrape_tdlr_enhanced.py projects.db --max-pages 5, then query with requirements.md, filter results, and send top 10 qualified projects to Slack #gtm-leads"
For OpenClaw API:
# Example: Daily search at 9 AM
create_cron_job \
name:"Construction Project Search" \
schedule:"0 9 * * *" \
prompt:"Run the OpenClaw construction project search using requirements.md, filter results, and send the top 10 qualified projects to my Slack channel"
Common schedules:
0 9 * * * (9 AM daily)
0 9 * * 1 (9 AM Mondays)
6h (every 6 hours)
Edit scripts/filter_projects.py in the score_project() function to customize match scoring:
# Increase location importance
if req_location.lower() in project_location.lower():
score += 30 # Changed from 25
# Add custom criteria
if project.get("leed_certified"):
score += 15
reasons.append("LEED certified")
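The snippet above can be expanded into a self-contained score_project() sketch; field names follow the example JSON output earlier, but the weights and structure here are illustrative, not the script's actual implementation:

```python
def score_project(project: dict, req: dict):
    """Score one project against requirements; return (score, reasons)."""
    score, reasons = 0, []
    # Location: substring match against the required region
    if req["location"].lower() in project.get("location", "").lower():
        score += 25
        reasons.append(f"Location match: {req['location']}")
    # Size: meets the minimum square footage
    if project.get("square_footage", 0) >= req["min_sqft"]:
        score += 20
        reasons.append(f"Size match: {project['square_footage']:,} sq ft")
    # Opportunity: no general contractor assigned yet
    if not project.get("general_contractor"):
        score += 10
        reasons.append("GC not assigned (opportunity)")
    return score, reasons

proj = {"location": "Austin, TX", "square_footage": 185000}
req = {"location": "TX", "min_sqft": 100000}
print(score_project(proj, req))  # score 55 with three match reasons
```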
Add new fields to references/requirements_template.md and update the parser in filter_projects.py:
# In load_requirements()
requirements = {
# ... existing fields ...
"leed_required": "LEED" in content,
"preferred_contractors": extract_list(content, r"Preferred GC[:\s]+(.+?)(?:\n|$)"),
}
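The extract_list() helper referenced above is not defined in the snippet; a minimal sketch of one possible implementation (the name and behavior are assumptions):

```python
import re

def extract_list(content: str, pattern: str) -> list:
    """Apply a regex with one capture group, splitting the match on commas.

    Hypothetical helper for the parser extension above; returns [] when
    the pattern is not found.
    """
    m = re.search(pattern, content)
    if not m:
        return []
    return [item.strip() for item in m.group(1).split(",")]

content = "Preferred GC: Turner, DPR, Skanska"
print(extract_list(content, r"Preferred GC[:\s]+(.+?)(?:\n|$)"))
# → ['Turner', 'DPR', 'Skanska']
```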
Keep qualified_projects.json files to track outreach.
Using TDLR Scraper (Recommended for Texas projects):
# 1. Install dependencies (one-time setup)
pip install -r requirements.txt
playwright install chromium
# 2. Requirements already saved to requirements.md
# 3. Scrape live Texas projects
python scripts/scrape_tdlr_enhanced.py projects.db --city "Austin" --max-pages 20
# 4. Query database with requirements (filtering at DB level)
python scripts/query_db.py requirements.md projects.db > raw_results.json
# 5. Filter and rank with scoring
python scripts/filter_projects.py raw_results.json requirements.md > qualified_projects.json
# 6. Review top 5 projects
jq '.projects[:5] | .[] | {project_name, location, match_score, estimated_cost, start_date}' qualified_projects.json
# 7. Schedule daily scraping + filtering
create_cron_job \
name:"Daily TDLR Leads" \
schedule:"0 9 * * *" \
prompt:"Scrape TDLR projects, query with requirements, and send top 10 to Slack #gtm-leads"
Using OpenClaw API:
# 1. Requirements already saved to requirements.md
# 2. Search for projects via API
python scripts/search_projects.py > raw_results.json
# 3. Filter and rank
python scripts/filter_projects.py raw_results.json requirements.md > qualified_projects.json
# 4. Review top 5 projects
jq '.projects[:5] | .[] | {project_name, location, match_score, estimated_cost, start_date}' qualified_projects.json
# 5. Schedule daily searches
create_cron_job \
name:"Daily Construction Leads" \
schedule:"0 9 * * *" \
prompt:"Run OpenClaw construction search and send top 10 to Slack #gtm-leads"
Using SQLite Database:
# 1. Requirements already saved to requirements.md
# 2. Query local database (filtering at DB level)
python scripts/query_db.py requirements.md projects.db > raw_results.json
# 3. (Optional) Further filter and rank with scoring
python scripts/filter_projects.py raw_results.json requirements.md > qualified_projects.json
# 4. Review results
jq '.projects[] | {project_name, location, square_footage, start_date}' raw_results.json
Test the workflow using placeholder data:
# 1. Generate sample data (script already includes samples)
python scripts/search_projects.py > raw_results.json
# 2. Create test requirements
cat > requirements.md << 'EOF'
# Construction Project Requirements
## Target Criteria
### Start Date
After March 2026
### Project Location
Texas
### Type of Work
Commercial, Industrial
### Square Footage
100000
### Estimated Cost
No restrictions
## Output Preferences
### Top Projects
20
EOF
# 3. Test filtering
python scripts/filter_projects.py raw_results.json requirements.md
# Should output scored and filtered projects