Generate comprehensive unit tests and traceability documentation from either provided code or a user requirement / use case. Use when the user wants Codex to find the related implementation, trace the full code path, test all relevant functions and methods, document Normal/Abnormal/Boundary coverage, and create a real Excel workbook output.
- Check `docs/template/unit-test/` for layout guidance.
- Produce a single `.xlsx` file under `docs/unit/` (no companion JSON inputs or extra sidecar files).
- Organize the workbook by function, not by use case, unless the user provides a concrete consolidated matrix example/template and wants the workbook to match that layout.
- Populate `Created By`.
- Cover Normal (N), Abnormal (A), and Boundary (B) scenarios for each directly relevant function or method.
- Default every result to `Untested`.
- Mark `Passed` or `Failed` rows whenever the repository already proves the behavior.
- Never mark `Passed` or `Failed` without executed evidence that reasonably maps to that case; do not fabricate a best-case 100% pass.
- Produce a single `.xlsx` workbook by default.
- Keep `Cover`, `Function List`, `Test Report`, and the function tabs in that same file unless the user explicitly asks for a stripped workbook.
- Do not use 1 sheet = 1 use case; use 1 sheet = 1 function.
- Title each function sheet from the function name, e.g. `processRescheduleRequest` -> `Process Reschedule Request`.
- Number test cases `UTCID01`, `UTCID02`, `UTCID03`, and so on.
- Use `O` to mark which conditions, confirms, and results apply to each UTCID.
- Include the header fields `Function Code`, `Function Name`, `Created By`, `Executed By`, `Lines of code`, `Lack of test cases`, and `Test requirement`.
- Keep the function identifier as `Function Code`. If there is also a UC ID, keep that in the requirement text, feature context, or workbook grouping instead of replacing the function identity.
- Use the function's readable title as `Function Name`.
- Leave `Executed By` blank unless execution ownership is known.
- Estimate `Lines of code` from the implementation.
- Set `Lack of test cases` to 0 when the suite is intended to be complete. If known gaps remain, use the number of missing cases.
- Write `Test requirement` as a concise function-specific description of what is verified.
- Track `Passed`, `Failed`, `Untested`, `N`, `A`, `B`, and `Total Test Cases`.
- When nothing has been executed, report `Passed` = 0, `Failed` = 0, and `Untested` = `Total Test Cases`.
- Count `N`, `A`, and `B` from the generated tests for that function or module.
- Under `Condition`, include required preconditions and one row per distinct input value or state used in the tests.
- Write `Precondition` rows in a detailed matrix style: one concrete setup fact per row, not one broad summary sentence. Prefer `Authenticated client owns request "req-1"` and `Request "req-1" is currently "DRAFT"` over vague text such as "valid editable request exists".
- Write `Request / Input Variable` rows as the actual values passed into the function under test, not the route template. Prefer `id = "audit-1"`, `requestId = "req-1"`, `page = 2`, `limit = 20`, and `reason = "Need reconsideration"` over rows that repeat `/audit-logs/:id` or `POST /leave/requests`.
- Avoid a single catch-all `request` or `body` row.
- Under `Condition`, prefer only `Precondition`, `Action`, `Request status` or `State`, and `Input Variable`.
- Under `Confirm`, include only expected returns and exceptions.
- Write `Exception` rows with the concrete failure reason when the code enforces a business rule. Prefer `Dispute "dispute-1" cannot be closed while hearing is pending` or `Email "[email protected]" already exists` over generic labels such as `Bad Request` or `Unprocessable Entity`, e.g. `ConflictException: Email "[email protected]" already exists`.
- If the matrix includes a `Log message` row, populate it with specific quoted messages. Prefer real asserted logger output; otherwise use deterministic executed-case messages that clearly describe the branch outcome.
- Skip `Side effect`, `Message`, `Log`, `Audit`, `Quota`, `Notification`, or other collaborator/internal rows unless the user explicitly asks for them.
- Record thrown errors under `Exception`, not under `Return`.
- Use `Return` only for successful or direct returned values and primary state outputs.
- When a use case mainly changes status, values like `status = PUBLIC_DRAFT` are valid `Return` rows.
- Under `Result`, include `Type` (N: Normal, A: Abnormal, B: Boundary), `Passed`/`Failed`, `Executed Date`, and `Defect ID`.
- When tests were actually run, mark `Passed` or `Failed` based on that real result instead of leaving it `Untested`.
- Keep `Untested` for `Passed`/`Failed` unless the tests were actually executed.
- Use `MM/DD` for `Executed Date` only when the date is known. Otherwise leave it blank.
- Leave `Defect ID` blank unless the user provides one or explicitly asks for mock IDs.
- Treat the `.xlsx` workbook as the primary output whenever the user asks for documentation, a matrix, Excel output, or a template like an `.xlsx` screenshot.
- Use `scripts/generate_traceability_xlsx.ps1` to build the workbook from a rectangular row matrix serialized as JSON.
- Name the output `docs/unit/fn-XX-<function-slug>-unit-test.xlsx` (or a feature-scoped name when the request covers many functions).
- Avoid leaving `*-input.json` artifacts; pass the JSON inline:

```powershell
$json = @'
{ "sheetName": "FN-01 Traceability", "rows": [ ["..."], ["..."] ] }
'@
& .codex/skills/test-case-documentation-and-unit-test-handling/scripts/generate_traceability_xlsx.ps1 -InputJsonText $json -OutputPath <output.xlsx> -Overwrite
```
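As a minimal sketch of how the inline JSON might be assembled, the snippet below builds a rectangular row matrix for one function sheet. The column layout, the `O` marks, and the sample values (`req-1`, the `ConflictException` text) are illustrative assumptions, not a layout fixed by this skill:

```python
import json

# Hypothetical matrix for one function sheet: first column groups rows
# (Condition / Confirm / Result), second column holds the concrete fact,
# and each UTCID column uses "O" to mark where the fact applies.
header = ["Group", "Detail", "UTCID01", "UTCID02", "UTCID03"]
rows = [
    header,
    ["Precondition",   'Authenticated client owns request "req-1"',          "O", "O", "O"],
    ["Precondition",   'Request "req-1" is currently "DRAFT"',               "O", "O", ""],
    ["Input Variable", 'requestId = "req-1"',                                "O", "O", "O"],
    ["Input Variable", 'reason = "Need reconsideration"',                    "O", "",  ""],
    ["Return",         'status = "PENDING_APPROVAL"',                        "O", "",  ""],
    ["Exception",      'ConflictException: Request "req-1" is not editable', "",  "O", ""],
    ["Type",           "",                                                   "N", "A", "B"],
]

payload = {"sheetName": "FN-01 Traceability", "rows": rows}
print(json.dumps(payload))  # feed this text to -InputJsonText
```

The matrix must stay rectangular (every row the same length) so the script can lay it out as a contiguous sheet region.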
- Write to `<workspace>/docs/unit/<function-code>-unit-test.xlsx` when a `docs/unit` folder exists; otherwise use `<workspace>/<function-code>-unit-test.xlsx`.
- Keep `Cover`, `Function List`, and `Test Report` in the same workbook by default.
- Invoke the script with `powershell -NoProfile -ExecutionPolicy Bypass -File ...`.
- `Lines of code` may be approximate but must not be blank.
- `Lack of test cases` should normally be 0 for a complete batch.
- `Passed`, `Failed`, `Untested`, `N`/`A`/`B`, and `Total Test Cases` should prefer Excel formulas like `COUNTIF`, `COUNTA`, and `SUM` over hardcoded counts.
- If the template pulls `Function Code` or `Function Name` from `Function List`, preserve that formula style.
- Report back only the `.xlsx` workbook path (and nothing else) unless the user explicitly asks for unit test code, raw matrices, or helper artifacts.
- Confirm a single `.xlsx` workbook exists under `docs/unit/`, and no helper files (JSON inputs, scratch notes) are left in `docs/`.
- Avoid `supertest` route execution unless the user explicitly asks to validate routing behavior.
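As a minimal sketch of the formula-over-hardcoded-counts rule, the Test Report counters for one function could be emitted as formula strings like the ones below. The sheet name `FN-01` and the result range `D9:D20` are assumptions about one particular layout, not something this skill prescribes:

```python
# Hypothetical Test Report formulas for one function sheet; the range D9:D20
# (the per-UTCID Passed/Failed/Untested column) and the sheet name "FN-01"
# are illustrative placeholders only.
results_range = "'FN-01'!D9:D20"

formulas = {
    "Passed":           f'=COUNTIF({results_range}, "Passed")',
    "Failed":           f'=COUNTIF({results_range}, "Failed")',
    "Untested":         f'=COUNTIF({results_range}, "Untested")',
    "Total Test Cases": f'=COUNTA({results_range})',
}

for label, formula in formulas.items():
    print(f"{label}: {formula}")
```

Writing these strings into the cells (instead of literal numbers) keeps the summary self-updating when someone later flips a case from `Untested` to `Passed`.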