Use for this repo’s NVIDIA Nemotron Kaggle challenge: the single combined LoRA notebook, TRL/PEFT/HF training, metric-aligned SFT, Kaggle environment debugging, and submission.zip packaging. Not for generic production MLOps.
Derived from a general AI-engineer profile, narrowed to this workspace only. Read AGENTS.md first.
- Core workflow: foundation-notebook.ipynb, the single combined notebook: env install → LoRA SFT → save adapter → zip.
- Submission: submission.zip containing adapter_config.json + weights; the host evaluates with vLLM + LoRA (see docs/competition-rules.md).
- Metric: scoring keys on the \boxed{answer} format, so keep SFT targets metric-aligned.
- Reference notebooks: nvidia-utility-script.ipynb, nvidia-nemotron-submission-demo.ipynb, nvidia-nemotron-metric.ipynb.
- Training defaults: bfloat16 and patterns from the demo notebooks.
- Ignore references to ai/memory-bank/; those paths do not exist in this repo.
- Stack: AutoModelForCausalLM, AutoTokenizer, PEFT LoRA (target modules in_proj|out_proj|up_proj|down_proj), TRL SFTTrainer / SFTConfig.
- Environment: uv pip, torch installed into /kaggle/working, mamba_ssm, and the optional CUTLASS path from the NVIDIA utility script.
- Data: train.csv (id, prompt, answer); resolve paths for /kaggle/input/... with a local fallback.
- Debug loop: parse stderr → classify as env, OOM, or API mismatch → apply the smallest patch to the combined notebook → suggest a MAX_TRAIN_SAMPLES smoke test → record outcomes under reports/.
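The PEFT/TRL wiring above can be sketched as follows. Only the target-module list and bfloat16 come from this profile; the rank, alpha, batch sizes, and output path are illustrative assumptions, not tuned values.

```python
# Sketch, not the competition configuration: r, lora_alpha, dropout, and the
# training hyperparameters below are placeholder assumptions.
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

lora_config = LoraConfig(
    r=16,                       # assumption
    lora_alpha=32,              # assumption
    lora_dropout=0.05,          # assumption
    target_modules=["in_proj", "out_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

sft_config = SFTConfig(
    output_dir="/kaggle/working/adapter",
    per_device_train_batch_size=1,   # assumption
    gradient_accumulation_steps=8,   # assumption
    bf16=True,                       # bfloat16, per the demo notebooks
    logging_steps=10,
)

# trainer = SFTTrainer(
#     model=model,               # AutoModelForCausalLM, loaded separately
#     args=sft_config,
#     train_dataset=dataset,     # built from train.csv (id, prompt, answer)
#     peft_config=lora_config,
# )
# trainer.train()
# trainer.save_model(sft_config.output_dir)  # writes adapter_config.json + weights
```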
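The train.csv path resolution could look like this hypothetical `resolve_train_csv` helper. Kaggle mounts datasets under `/kaggle/input/<slug>/`, and since the slug is not spelled out here, the sketch globs one level deep before falling back to local directories.

```python
from pathlib import Path
from typing import Iterable

# Directory names below are assumptions, not values from this profile.
DEFAULT_DIRS = (Path("/kaggle/input"), Path("data"), Path("."))

def resolve_train_csv(filename: str = "train.csv",
                      search_dirs: Iterable[Path] = DEFAULT_DIRS) -> Path:
    """Return the first existing copy of train.csv: Kaggle input first, local fallback."""
    for base in search_dirs:
        direct = base / filename
        if direct.is_file():
            return direct
        # Kaggle datasets live one level down, e.g. /kaggle/input/<slug>/train.csv.
        for hit in sorted(base.glob(f"*/{filename}")):
            if hit.is_file():
                return hit
    raise FileNotFoundError(f"{filename} not found under {list(search_dirs)}")
```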
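The submission packaging step can be sketched like this. The flat, root-level archive layout is an assumption to verify against docs/competition-rules.md, and `package_submission` is a hypothetical helper name.

```python
import zipfile
from pathlib import Path

def package_submission(adapter_dir: Path,
                       out_zip: Path = Path("submission.zip")) -> Path:
    """Zip adapter_config.json + weight files into submission.zip.

    Files are written flat at the archive root (an assumption; check the
    competition rules for the required layout).
    """
    if not (adapter_dir / "adapter_config.json").is_file():
        raise FileNotFoundError(f"missing {adapter_dir / 'adapter_config.json'}")
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in sorted(adapter_dir.iterdir()):
            if f.is_file():
                zf.write(f, arcname=f.name)  # flat layout, no nested folders
    return out_zip
```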
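The stderr triage in the debug loop could start from a small rule table like this hypothetical `classify_stderr`; the patterns are illustrative assumptions, not an exhaustive taxonomy.

```python
import re

# Ordered rules: first match wins, so the OOM pattern is checked before the
# broader env and API patterns.
RULES = [
    ("oom", re.compile(r"CUDA out of memory|OutOfMemoryError", re.I)),
    ("env", re.compile(r"ModuleNotFoundError|ImportError|undefined symbol", re.I)),
    ("api-mismatch", re.compile(r"unexpected keyword argument|has no attribute", re.I)),
]

def classify_stderr(stderr: str) -> str:
    """Map a captured stderr dump to a coarse failure class for the debug loop."""
    for label, pattern in RULES:
        if pattern.search(stderr):
            return label
    return "unknown"
```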