Set up a farm data pipeline for an external service using Firecrawl browser profiles, farmer-assisted login, site exploration, and generated vendor skills. Use when a farmer asks to connect a service like NoFence or to create a recurring data pipeline.
Use this skill when a farmer wants the agent to connect a web-based farm system and turn it into a recurring observation pipeline.
Use Firecrawl CLI directly for the live setup. Do not bounce through bespoke setup tools for login handoff, exploration, or "continue" steps.
The only plugin tool this setup should need is save_data_pipeline once the agent already knows:
Then use run_data_pipeline for verification and list_data_pipelines for inspection.
Farm-specific configuration lives in the `DataPipeline` record, not in the generated vendor skill. The generated skill itself is written to `skills/pipeline-{vendor}/SKILL.md`. Scope the browser profile per farm using `farm_id`, naming it `{vendor_slug}-{farm_id}`. Start the live session with:

```shell
firecrawl scrape "https://example.com/login" --profile "{vendor_slug}-{farm_id}" --json
```
Capture the `scrapeId` from the response, then continue the same session:

```shell
firecrawl interact --scrape-id "<scrape-id>" --prompt "Do not change the page. Just say ready for login handoff." --json
```
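Pulling the id out of the scrape response can be sketched as below. The `scrapeId` field name follows this skill's text; the rest of the response shape is an assumption for illustration.

```python
import json

def extract_scrape_id(response_json: str) -> str:
    # Parse the JSON printed by `firecrawl scrape ... --json` and return
    # the session id needed by `firecrawl interact --scrape-id`.
    # The field name `scrapeId` follows the skill text; the overall
    # response shape here is an assumption.
    return json.loads(response_json)["scrapeId"]
```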
Important:
- Continue the handoff with the same `scrapeId` throughout.
- If `skills/pipeline-{vendor}/SKILL.md` already exists, use it as the starting point and only re-explore when the site appears to have changed.
- Only collect data the farmer understands and wants.
Once the site is understood, call save_data_pipeline with the learned configuration.
The saved pipeline should include:
Keep the generated skill reusable across farms. Do not bake in farm IDs, raw account-specific secrets, or farmer-specific preferences that belong in the store.
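A hypothetical guard for this rule: before saving the generated skill, reject any draft that embeds the concrete farm identifier rather than a placeholder. The helper name is an assumption, not part of the plugin's API.

```python
def skill_is_farm_agnostic(skill_md: str, farm_id: str) -> bool:
    # Hypothetical guard: the generated vendor skill must stay reusable
    # across farms, so the literal farm_id must not appear in its text.
    # Farm-specific values belong in the DataPipeline record instead.
    return farm_id not in skill_md
```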
Verify the pipeline with `run_data_pipeline` before enabling the cron.

Use this structure for `skills/pipeline-{vendor}/SKILL.md`:
---