Prepare Blender-based flash/nonflash datasets for the ref_gaussian_wdh_flash project. Use when Codex needs to render missing frames from a .blend scene, split leading nonflash and trailing flash images, generate MVInverse albedo and normal maps, validate the prepared dataset, create a scene-aligned train shell whose model_path basename matches the prepared dataset basename, repair zero-byte flash precompute caches, or resume training safely around the 20000-iteration flash indirect precompute stage.
This skill standardizes the Blender flash dataset workflow used by ref_gaussian_wdh_flash.
Use it for five recurring jobs:

- Render missing frames from a .blend scene.
- Prepare flash/nonflash/albedo/normal/segment/transforms before training.
- Validate the prepared dataset.
- Create a scene-aligned train shell and start training.
- Repair zero-byte precompute caches and resume safely around flash_indirect_from_iter.

Trigger this skill when the user asks to do any of the following:

- Prepare datasets in the flash/nonflash + MVInverse format used by this repo.
- Prepare office, shelf, sofa, or similar flash datasets in this project.
- Generate train_<scene>.sh and start training.
- Repair ind_diff_precomp/*.exr.

Do not use this skill for:
- Anything handled by convert.py.
- Modifying convert.py; keep it unchanged, and keep Blender workflows separate.

Key conventions:

- The main entry point is tools/blender_flash_dataset_workflow.py.
- Use the ref_zjr Python for repo scripts, MVInverse inference, and training.
- The dataset basename and the model_path basename should match.
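The basename-matching convention can be verified mechanically before launching a run. A minimal sketch; the helper name `paths_aligned` is mine, not a repo function:

```python
from pathlib import Path

def paths_aligned(source_path: str, model_path: str) -> bool:
    """True when the dataset basename matches the model_path basename."""
    return Path(source_path).name == Path(model_path).name

# The pairing example used in this document:
print(paths_aligned("/home/wdh/Project/Data-Scaffold/office_clear",
                    "../output/office_clear"))  # True
```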
- For example, /home/wdh/Project/Data-Scaffold/office_clear pairs with ../output/office_clear.
- Treat a disk with < 30G free as risky.
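The disk-space convention can be checked up front. A sketch, assuming "30G" means 30 GiB; the helper name is mine:

```python
import shutil

def disk_is_risky(path: str, min_free_gib: float = 30.0) -> bool:
    """True when the filesystem holding `path` has less free space than
    the threshold (default mirrors the '< 30G free' convention above)."""
    free_gib = shutil.disk_usage(path).free / (1024 ** 3)
    return free_gib < min_free_gib

if disk_is_risky("."):
    print("warning: < 30G free; renders and precompute caches may not fit")
```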
Workflow:

1. Locate the repo root: it contains train.py and tools/blender_flash_dataset_workflow.py. If it cannot be detected, set REFGS_REPO_ROOT.
2. Confirm dataset shape.
   The dataset directory contains image/, segment/, transforms_train.json, and usually a .blend file. The first N frames are nonflash and the last N frames are flash.
3. Prepare the dataset: split nonflash/flash, run MVInverse, normalize points3d.ply, and emit workflow_report.json.
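The split rule (leading half nonflash, trailing half flash) can be sketched as below. Frame naming and layout are assumptions, and the real preparation (MVInverse, point normalization, report) is done by tools/blender_flash_dataset_workflow.py; this only illustrates the split:

```python
import shutil
from pathlib import Path

def split_flash_pairs(dataset_root: str) -> tuple[list[Path], list[Path]]:
    """Copy the first half of image/ into nonflash/ and the second half
    into flash/, returning both frame lists."""
    frames = sorted((Path(dataset_root) / "image").iterdir())
    if len(frames) % 2:
        raise ValueError("expected an even frame count (N nonflash + N flash)")
    n = len(frames) // 2
    nonflash, flash = frames[:n], frames[n:]
    for sub, group in (("nonflash", nonflash), ("flash", flash)):
        out = Path(dataset_root) / sub
        out.mkdir(exist_ok=True)
        for f in group:
            shutil.copy2(f, out / f.name)
    return nonflash, flash
```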
4. Validate before training. Check that:
   - nonflash == N
   - flash == N
   - albedo == total
   - normal == total
   - segment == total
   - points3d.ply contains x y z nx ny nz red green blue
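The checklist above can be scripted. A sketch under an assumed directory layout; `validate_dataset` is not a repo function, and the PLY check only reads the ASCII header:

```python
from pathlib import Path

# Property names the checklist requires in the points3d.ply header.
REQUIRED_FIELDS = {"x", "y", "z", "nx", "ny", "nz", "red", "green", "blue"}

def validate_dataset(root: str) -> list[str]:
    """Apply the count and PLY-field checks; returns failure messages
    (an empty list means the dataset passed)."""
    base = Path(root)

    def count(name: str) -> int:
        d = base / name
        return len(list(d.iterdir())) if d.is_dir() else -1

    errors = []
    n_non, n_flash = count("nonflash"), count("flash")
    if n_non != n_flash:
        errors.append(f"nonflash ({n_non}) != flash ({n_flash})")
    total = n_non + n_flash
    for name in ("albedo", "normal", "segment"):
        if count(name) != total:
            errors.append(f"{name} ({count(name)}) != total ({total})")
    ply = base / "points3d.ply"
    if ply.exists():
        header = ply.read_bytes().split(b"end_header")[0].decode("ascii", "ignore")
        fields = {ln.split()[-1] for ln in header.splitlines() if ln.startswith("property")}
        if not REQUIRED_FIELDS <= fields:
            errors.append(f"points3d.ply missing fields: {sorted(REQUIRED_FIELDS - fields)}")
    else:
        errors.append("points3d.ply not found")
    return errors
```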
5. Train or resume: the dataset basename and model_path basename must agree (dataset office_clear trains into model path office_clear). Resume with --load_iteration 19999 when flash_indirect_from_iter == 20000.
6. Repair precompute failures.
   If any ind_diff_precomp/*.exr is unreadable or zero-byte, delete the corrupt caches and resume from --load_iteration 19999 so the flash indirect precompute re-runs.
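The zero-byte scan can be sketched as follows. The cache location directly under the model path is an assumption, and truly unreadable (non-empty but corrupt) EXRs would need an image reader to detect, which this sketch skips:

```python
from pathlib import Path

def zero_byte_caches(model_path: str) -> list[Path]:
    """List zero-byte EXR caches under ind_diff_precomp/ so they can be
    deleted before resuming. Returns [] when the cache dir is absent."""
    cache = Path(model_path) / "ind_diff_precomp"
    if not cache.is_dir():
        return []
    return sorted(p for p in cache.glob("*.exr") if p.stat().st_size == 0)

# Delete the bad caches, then resume (e.g. --load_iteration 19999) so the
# precompute stage regenerates them.
for bad in zero_byte_caches("../output/office_clear"):
    print(f"removing {bad}")
    bad.unlink()
```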
When using this skill, always report:
- Frame counts for nonflash, flash, albedo, normal, and segment.
- Whether points3d.ply needed normalization.