Test generation for claudeHQ packages. Analyzes code, identifies patterns, generates Vitest tests following existing conventions. Use when: "generate tests for", "add tests", "improve coverage", "test X"
Generates comprehensive Vitest tests for claudeHQ packages. Follows existing test patterns, handles package-specific testing conventions, and verifies generated tests pass.
Load project context before generating tests.
Read project state files (skip any that don't exist):
- .claude/shared/codebase-inventory.json — file map, module boundaries
- .claude/shared/test-patterns.md — mock examples and testing conventions (if it exists)
- .claude/shared/registry.json — cross-skill state

Identify test framework configuration:

- vitest.config.ts or vite.config.ts at the workspace root and in each package
- whether globals (describe, it, expect) are enabled

Test file location conventions:

- packages/agent/src/__tests__/*.test.ts
- packages/hub/src/__tests__/*.test.ts
- packages/dashboard/tests/*.test.ts (or packages/dashboard/tests/components/, tests/composables/, etc.)
- packages/shared/src/__tests__/*.test.ts
- packages/shared/src/__tests__/protocol.test.ts (or similar)

Test categories per package:
| Package | Test Categories |
|---|---|
| Agent | PTY session lifecycle, queue management (enqueue/dequeue/priority/auto-advance), recording (JSONL write/chunk/upload), WS client (connect/reconnect/message handling), CLI argument parsing, config validation, health reporting |
| Hub | Fastify route handlers (using app.inject()), SQLite queries (using in-memory DB), WS relay logic, notification dispatch, recording file I/O, session state machine |
| Dashboard | Nuxt components (with @nuxt/test-utils or @vue/test-utils), Pinia store actions/getters, composables (useWebSocket, useTerminal, useReplay, useNotifications), utility functions |
| Shared | Zod schema validation (valid/invalid inputs, edge cases, error messages), protocol type guards, utility function correctness |
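Several of the categories above — queue management in particular — are pure logic that tests can exercise without any mocks. A minimal sketch of the kind of priority queue such tests would target (the `SessionQueue` name and API here are assumptions for illustration, not the actual claudeHQ agent implementation):

```ts
// Hypothetical sketch of a priority session queue -- names and shape are
// assumptions for illustration, not the actual claudeHQ agent API.
interface QueuedSession {
  id: string;
  priority: number; // higher runs first
}

class SessionQueue {
  private items: QueuedSession[] = [];

  enqueue(item: QueuedSession): void {
    this.items.push(item);
    // Stable sort keeps FIFO order among equal priorities.
    this.items.sort((a, b) => b.priority - a.priority);
  }

  dequeue(): QueuedSession | undefined {
    return this.items.shift();
  }

  get size(): number {
    return this.items.length;
  }
}

const q = new SessionQueue();
q.enqueue({ id: 'test-session-001', priority: 1 });
q.enqueue({ id: 'test-session-002', priority: 5 });
console.log(q.dequeue()?.id); // higher priority dequeues first
```

Tests for a unit like this cover enqueue/dequeue ordering, priority ties, and the empty-queue case — no `vi.mock()` required.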
Understand the code that needs tests.
Identify the target: Parse the user's request to determine which package, file, or function needs tests.
Read the target code thoroughly: for each file, note its exports, dependencies, error paths, and side effects.
Check existing tests: Look for existing test files for the target. If they exist, extend them rather than duplicating coverage.
Determine test type: unit, integration, or component, based on what the code does.
Study existing tests in the same package to match conventions.
Find existing test files in the target package:
Glob: packages/<pkg>/src/__tests__/*.test.ts
Glob: packages/<pkg>/tests/**/*.test.ts
Extract patterns from existing tests:
- describe/it/test nesting structure
- setup/teardown hooks (beforeEach, afterEach, beforeAll, afterAll)
- mocking style (vi.mock(), vi.fn(), vi.spyOn())
- assertion style (expect().toBe(), expect().toEqual(), expect().toMatchObject())
- async handling (async/await, .resolves, .rejects)

Package-specific mock patterns:
Agent mocks:
```ts
// node-pty mock
vi.mock('node-pty', () => ({
  spawn: vi.fn(() => ({
    onData: vi.fn(),
    onExit: vi.fn(),
    write: vi.fn(),
    kill: vi.fn(),
    pid: 12345,
  })),
}));

// ws WebSocket mock
vi.mock('ws', () => ({
  default: vi.fn(() => ({
    on: vi.fn(),
    send: vi.fn(),
    close: vi.fn(),
    readyState: 1, // OPEN
  })),
}));
```
Hub mocks:
```ts
// Fastify app.inject() for route testing (no mock needed, use real Fastify)
import Fastify from 'fastify';

const app = Fastify();
// Register routes...
const response = await app.inject({
  method: 'GET',
  url: '/api/sessions',
});
expect(response.statusCode).toBe(200);

// In-memory SQLite for DB tests
import Database from 'better-sqlite3';

const db = new Database(':memory:');
// Run migrations...
```
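The hub's session state machine (one of the categories in the table above) is another unit that tests can drive directly, without Fastify or a database. A minimal sketch, assuming a simple queued → running → completed/failed lifecycle — the real claudeHQ states and transitions may differ:

```ts
// Hypothetical session state machine -- states and transitions are
// assumptions for illustration, not the actual claudeHQ hub implementation.
type SessionState = 'queued' | 'running' | 'completed' | 'failed';

const transitions: Record<SessionState, SessionState[]> = {
  queued: ['running'],
  running: ['completed', 'failed'],
  completed: [], // terminal
  failed: [],    // terminal
};

function transition(from: SessionState, to: SessionState): SessionState {
  if (!transitions[from].includes(to)) {
    throw new Error(`invalid transition: ${from} -> ${to}`);
  }
  return to;
}
```

Tests then assert each legal transition succeeds and each illegal one throws, which maps cleanly onto the `it('should throw <error> when <invalid condition>')` pattern used below.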
Dashboard mocks:
```ts
// Vue component mount
import { mount } from '@vue/test-utils';
import { createTestingPinia } from '@pinia/testing';

const wrapper = mount(Component, {
  global: {
    plugins: [createTestingPinia()],
  },
});

// WebSocket composable mock
vi.mock('~/composables/useWebSocket', () => ({
  useWebSocket: () => ({
    send: vi.fn(),
    status: ref('connected'),
    data: ref(null),
  }),
}));

// xterm.js mock
vi.mock('xterm', () => ({
  Terminal: vi.fn(() => ({
    open: vi.fn(),
    write: vi.fn(),
    dispose: vi.fn(),
    onData: vi.fn(),
    loadAddon: vi.fn(),
  })),
}));
```
Shared mocks:
```ts
// Zod schema testing (no mocks needed, test validation directly)
import { sessionSchema } from '../types';

expect(() => sessionSchema.parse(validData)).not.toThrow();
expect(() => sessionSchema.parse(invalidData)).toThrow();
```
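Protocol type guards (the other Shared category) are equally mock-free: feed the guard valid, partially valid, and non-object inputs and assert the boolean result. A sketch with a hypothetical message shape — the real claudeHQ protocol types live in packages/shared and may look different:

```ts
// Hypothetical protocol type guard -- the message shape is an assumption
// for illustration, not the actual claudeHQ protocol definition.
interface SessionMessage {
  type: 'session';
  sessionId: string;
}

function isSessionMessage(value: unknown): value is SessionMessage {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  return v.type === 'session' && typeof v.sessionId === 'string';
}

console.log(isSessionMessage({ type: 'session', sessionId: 'test-session-001' }));
```

The edge cases worth covering are the ones a naive guard misses: `null`, primitives, the right `type` with a missing or mistyped field.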
Write the test files.
Structure each test file:
```ts
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
// ... imports of code under test and mocks

describe('<ModuleName>', () => {
  // Setup
  beforeEach(() => { /* ... */ });
  afterEach(() => { vi.restoreAllMocks(); });

  describe('<functionName>', () => {
    it('should <expected behavior> when <condition>', () => {
      // Arrange
      // Act
      // Assert
    });

    it('should throw <error> when <invalid condition>', () => {
      // ...
    });

    it('should handle <edge case>', () => {
      // ...
    });
  });
});
```
Test coverage priorities (in order): happy paths for core behavior, error handling for invalid input, boundary and edge cases, then async/concurrency behavior.
Naming convention: Test descriptions should read as sentences:
- it('should start a PTY session with the given prompt and cwd')
- it('should reject sessions when max concurrent limit is reached')
- it('should reconnect WebSocket with exponential backoff after disconnect')

Test data: Use realistic but deterministic test data:
- session IDs like 'test-session-001'
- machine names like 'test-machine'
- fixed timestamps instead of Date.now()

Assertion specificity:
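One common way to keep test data both realistic and deterministic is a small fixture factory: fixed defaults, with per-test overrides for only the fields a given test cares about. A sketch with hypothetical field names:

```ts
// Hypothetical session fixture factory -- field names are assumptions for
// illustration. Fixed defaults keep every test run deterministic.
interface SessionFixture {
  id: string;
  machine: string;
  createdAt: number;
}

function makeSession(overrides: Partial<SessionFixture> = {}): SessionFixture {
  return {
    id: 'test-session-001',
    machine: 'test-machine',
    createdAt: 1700000000000, // fixed timestamp instead of Date.now()
    ...overrides,
  };
}

// A test overrides only what it exercises:
const failedOnOtherMachine = makeSession({ machine: 'test-machine-2' });
console.log(failedOnOtherMachine.machine);
```

This keeps fixtures in one place, so a schema change touches one factory rather than every test file.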
- toEqual over toBe for objects
- toMatchObject for partial matching
- toHaveBeenCalledWith for mock verification

Verify generated tests pass.
Run the new tests:
npx vitest run <test-file-path>
If tests fail: fix the mock setup or the test itself and re-run; if a failure exposes a genuine bug in the code under test, report it rather than adjusting the test to pass.
Run the full test suite to ensure new tests don't interfere with existing ones:
pnpm --filter @chq/<pkg> test
Check for flakiness: If any test uses timers, randomness, or async operations, verify it passes consistently. Use vi.useFakeTimers() for timer-dependent tests.
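One way to shrink the surface that vi.useFakeTimers() has to cover is to isolate the schedule computation as a pure function and assert on it directly, leaving only the setTimeout wiring to fake timers. A sketch for the reconnect-backoff case, with assumed base and cap values:

```ts
// Hypothetical exponential backoff schedule -- pure and deterministic, so
// tests can assert on delays without real timers. Base/cap are assumptions.
function backoffDelay(attempt: number, baseMs = 1000, capMs = 30000): number {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

console.log([0, 1, 2, 3, 10].map((n) => backoffDelay(n)));
// -> [1000, 2000, 4000, 8000, 30000]
```

The remaining timer-driven test then only needs `vi.advanceTimersByTime(backoffDelay(n))` to step through reconnect attempts deterministically.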
Present results to the user.
Tests generated:
- files created, with counts (<N> describe blocks, <N> it blocks)

Coverage gaps identified (if any):
Bugs discovered (if any):
Recommendations:
After generating tests:
"Run /dashboard-qa to visually verify the components tested."
Update tracking files after test generation.
Log execution in .claude/shared/registry.json:
lastExecution: { "skill": "test-gen", "target": "<description>", "date": "<YYYY-MM-DD>", "testsGenerated": <count>, "status": "complete" }

Register partial coverage as incomplete if some code paths couldn't be tested:
.claude/shared/incompletes.json:
```json
{
  "type": "test-coverage-gap",
  "target": "<file-path>",
  "reason": "<why it couldn't be tested>",
  "suggestion": "<what's needed to test it>",
  "date": "<YYYY-MM-DD>"
}
```
Update the codebase inventory if new test files were created: add them to .claude/shared/codebase-inventory.json under the appropriate package.