Agent-driven YOLO fine-tuning — annotate, train, export, deploy
Agent-driven custom model training powered by Aegis's Training Agent. It closes the annotation-to-deployment loop: take a COCO dataset from the dataset-annotation skill, fine-tune a YOLO model, auto-export it to the format best suited to your hardware, and optionally deploy it as your active detection skill.
Works with: dataset-annotation skill · Config: env_config.py · Tags: dataset-annotation, model-training, yolo-detection-2026
┌─────────────┐          ┌──────────────────┐           ┌──────────────────┐
│  Annotate   │  COCO    │  Fine-tune YOLO  │  .pt      │  Deploy custom   │
│  Review     │─────────▶│  Auto-export     │──────────▶│  model as active │
│  Export     │  JSON    │  Validate mAP    │  .engine  │  detection skill │
└─────────────┘          └──────────────────┘           └──────────────────┘
       ▲                                                          │
       └──────────────────────────────────────────────────────────┘
            Feedback loop: better detection → better annotation
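Ultralytics-style trainers consume a YOLO `data.yaml` rather than raw COCO JSON, so the dataset typically needs a small manifest after conversion. A minimal sketch, assuming an `images/train` / `images/val` split (the directory layout, function name, and class list here are illustrative assumptions, not part of this skill's contract):

```python
from pathlib import Path

def write_data_yaml(dataset_dir: str, class_names: list[str]) -> str:
    """Write a minimal Ultralytics-style data.yaml for a converted dataset.

    Assumes images are already split into images/train and images/val
    (an assumption for this sketch, not a guarantee of the skill).
    """
    root = Path(dataset_dir).expanduser()
    names = "\n".join(f"  {i}: {n}" for i, n in enumerate(class_names))
    yaml_text = (
        f"path: {root}\n"
        "train: images/train\n"
        "val: images/val\n"
        "names:\n"
        f"{names}\n"
    )
    root.mkdir(parents=True, exist_ok=True)
    (root / "data.yaml").write_text(yaml_text)
    return yaml_text

print(write_data_yaml("/tmp/front_door_people", ["person", "car"]))
```

The class indices must match the category IDs used during annotation, or per-class AP numbers will be attributed to the wrong labels.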
Commands:
{"event": "train", "dataset_path": "~/datasets/front_door_people/", "base_model": "yolo26n", "epochs": 50, "batch_size": 16}
{"event": "export", "model_path": "runs/train/best.pt", "formats": ["coreml", "tensorrt"]}
{"event": "validate", "model_path": "runs/train/best.pt", "dataset_path": "~/datasets/front_door_people/"}
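An agent can drive the skill by emitting these commands as newline-delimited JSON (the stdin transport and the `command` helper below are assumptions for this sketch; the field names match the examples above):

```python
import json

def command(event: str, **fields) -> str:
    """Serialize one command as a single JSON line, matching the wire format above."""
    return json.dumps({"event": event, **fields})

train_cmd = command(
    "train",
    dataset_path="~/datasets/front_door_people/",
    base_model="yolo26n",
    epochs=50,
    batch_size=16,
)
export_cmd = command("export", model_path="runs/train/best.pt",
                     formats=["coreml", "tensorrt"])

# Assuming a stdio transport, each line would be written to the
# training process's stdin, e.g.:
#   proc.stdin.write((train_cmd + "\n").encode())
print(train_cmd)
```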
Events:
{"event": "ready", "gpu": "mps", "base_models": ["yolo26n", "yolo26s", "yolo26m", "yolo26l"]}
{"event": "progress", "epoch": 12, "total_epochs": 50, "loss": 0.043, "mAP50": 0.87, "mAP50_95": 0.72}
{"event": "training_complete", "model_path": "runs/train/best.pt", "metrics": {"mAP50": 0.91, "mAP50_95": 0.78, "params": "2.6M"}}
{"event": "export_complete", "format": "coreml", "path": "runs/train/best.mlpackage", "speedup": "2.1x vs PyTorch"}
{"event": "validation", "mAP50": 0.91, "per_class": [{"class": "person", "ap": 0.95}, {"class": "car", "ap": 0.88}]}
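On the reading side, each emitted line is an independent JSON event keyed by its `event` field. A hedged dispatch sketch (the handler behavior is illustrative, not part of the skill):

```python
import json
from typing import Callable, Iterable

def dispatch_events(lines: Iterable[str],
                    handlers: dict[str, Callable[[dict], None]]) -> None:
    """Parse newline-delimited JSON events and route each by its 'event' field."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        handler = handlers.get(event["event"])
        if handler:
            handler(event)

seen = {}
dispatch_events(
    [
        '{"event": "progress", "epoch": 12, "total_epochs": 50, "mAP50": 0.87}',
        '{"event": "training_complete", "model_path": "runs/train/best.pt", '
        '"metrics": {"mAP50": 0.91}}',
    ],
    {
        "progress": lambda e: seen.update(last_epoch=e["epoch"]),
        "training_complete": lambda e: seen.update(model=e["model_path"]),
    },
)
print(seen)  # → {'last_epoch': 12, 'model': 'runs/train/best.pt'}
```

Unknown event types fall through silently here, which keeps the loop forward-compatible if the skill adds new events.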
Setup:
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
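The `ready` event reports the detected GPU, which suggests an export target. A tiny sketch of that mapping (the mapping itself is my assumption, consistent only with the formats shown above: CoreML for Apple MPS, TensorRT for CUDA, plain PyTorch otherwise):

```python
def pick_export_format(gpu: str) -> str:
    """Map the 'gpu' field from the ready event to an export format.

    Assumption for this sketch: CoreML suits Apple's MPS backend,
    TensorRT suits CUDA GPUs, and anything else falls back to .pt.
    """
    return {"mps": "coreml", "cuda": "tensorrt"}.get(gpu, "pt")

print(pick_export_format("mps"))  # → coreml
```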