Enforce ManulHeart's concurrency contract when editing runtime/, cdp/, worker/, or anything that spawns goroutines. Use when adding shared state, introducing a goroutine, modifying the Worker/Pool API, or touching CDP transport internals.
Established in 0.0.0.2; extended in 0.0.0.3 with RunHuntsInParallel
and per-worker log prefixes, and in 0.0.0.5 with the configuration
system (pkg/config) and the VS Code debug protocol (pkg/runtime/debug.go).
Every rule here has a test under -race; violations trip CI.
runtime.Runtime is single-goroutine. Fields are unguarded by design
(cachedElements, ScopedVariables.levels, stickyCheckboxStates).
Every parallel unit owns its own Runtime. See
pkg/runtime/runtime.go:41-58 for the
doc comment.
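The ownership rule above can be sketched in a few lines. This is a self-contained illustration, not the real Runtime: fakeRuntime and its cachedElements field are stand-ins showing why unguarded fields are safe when each goroutine constructs its own instance.

```go
package main

import (
	"fmt"
	"sync"
)

// fakeRuntime stands in for runtime.Runtime: its map is unguarded on purpose.
type fakeRuntime struct {
	cachedElements map[string]int // no mutex — single-goroutine by contract
}

func main() {
	var wg sync.WaitGroup
	results := make([]int, 4)
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// Correct: each parallel unit constructs and owns its own runtime.
			rt := &fakeRuntime{cachedElements: map[string]int{}}
			rt.cachedElements["node"] = id // never visible to sibling goroutines
			results[id] = rt.cachedElements["node"]
		}(i)
	}
	wg.Wait()
	fmt.Println(results)
}
```

Sharing one fakeRuntime across the goroutines instead would be a data race that `-race` flags immediately.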
Parallelism goes through pkg/worker. Never spin up goroutines that
share a Runtime. Use:
- worker.NewWorker — owns a real Chrome + Page + Runtime.
- worker.AdoptWorker — wraps an existing browser.Page (tests / embed).
- worker.NewPool — bounded concurrency, jobs channel, first-error tracking.
- worker.RunHuntsInParallel(ctx, cfg, hunts, n, logger) — zero-config
  convenience wrapper that creates a pool, runs hunts, and returns results
  in input order. Use this for quick fan-out; use NewPool directly when
  you need FailFast or custom ChromeOptions.

Ports go through worker.PortAllocator. No hardcoded 9222, no
parallel-safe assumption without Acquire() / Release().
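The pool semantics described above (bounded concurrency, a jobs channel, first-error tracking) can be sketched with only the standard library. This is not the real worker.Pool API — miniPool and its methods are illustrative names:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// miniPool sketches worker.NewPool's contract: n goroutines drain a jobs
// channel, and only the first error encountered is retained.
type miniPool struct {
	jobs     chan func() error
	wg       sync.WaitGroup
	errOnce  sync.Once
	firstErr error
}

func newMiniPool(n int) *miniPool {
	p := &miniPool{jobs: make(chan func() error)}
	for i := 0; i < n; i++ {
		p.wg.Add(1)
		go func() {
			defer p.wg.Done()
			for job := range p.jobs {
				if err := job(); err != nil {
					p.errOnce.Do(func() { p.firstErr = err })
				}
			}
		}()
	}
	return p
}

func (p *miniPool) submit(job func() error) { p.jobs <- job }

// wait closes the jobs channel, drains the workers, and reports the
// first error — what a FailFast=false run would return.
func (p *miniPool) wait() error {
	close(p.jobs)
	p.wg.Wait()
	return p.firstErr
}

func main() {
	p := newMiniPool(3)
	for i := 0; i < 5; i++ {
		i := i
		p.submit(func() error {
			if i == 2 {
				return errors.New("hunt 2 failed")
			}
			return nil
		})
	}
	fmt.Println(p.wait())
}
```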
cdp.Conn is safe for concurrent use. Writes serialized by writeMu,
request IDs via atomic.Int64, Close() via sync.Once.
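The locking scheme above — serialized writes, lock-free request IDs, idempotent Close — looks roughly like this. sketchConn is an illustration of the pattern, not the real cdp.Conn:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// sketchConn mirrors the cdp.Conn scheme described above.
type sketchConn struct {
	writeMu   sync.Mutex   // serializes writes to the websocket
	nextID    atomic.Int64 // request IDs without a lock
	closeOnce sync.Once    // Close() is safe to call twice
	closed    bool
}

func (c *sketchConn) send(method string) int64 {
	id := c.nextID.Add(1) // unique even under concurrent callers
	c.writeMu.Lock()
	defer c.writeMu.Unlock()
	// ... write the frame for (id, method) here ...
	_ = method
	return id
}

func (c *sketchConn) Close() { c.closeOnce.Do(func() { c.closed = true }) }

func main() {
	c := &sketchConn{}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); c.send("Page.navigate") }()
	}
	wg.Wait()
	c.Close()
	c.Close() // second call is a no-op thanks to sync.Once
	fmt.Println(c.nextID.Load(), c.closed)
}
```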
Subscriptions require Close(). c.Subscribe() returns *Subscription.
Always defer sub.Close(). The channel is closed by the publisher on
Conn.Close(), so receivers must handle the ok == false case.
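The publisher-closes-the-channel contract reduces to the following pattern. This sketch uses a bare channel rather than the real *cdp.Subscription:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	events := make(chan string, 4) // stands in for a Subscription's channel
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		for {
			ev, ok := <-events
			if !ok { // publisher closed the channel — the Conn.Close() case
				fmt.Println("subscription closed")
				return
			}
			fmt.Println("event:", ev)
		}
	}()
	events <- "Page.loadEventFired"
	close(events) // what the publisher does to every subscriber on shutdown
	wg.Wait()
}
```

Receivers that range over the channel get the same guarantee for free; the explicit `ok` form is shown because it matches the rule's wording.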
Extension registries freeze at init. RegisterCustomControl /
RegisterGoCall must be called before pool.Run(...). Handlers
themselves must be concurrent-safe — every worker may invoke the same
handler simultaneously.
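A freeze-at-init registry can be sketched as below. frozenRegistry and its methods are illustrative names, not the real extension API; the point is that late registration is rejected rather than silently racing with running workers:

```go
package main

import (
	"fmt"
	"sync"
)

// frozenRegistry sketches the freeze-at-init rule.
type frozenRegistry struct {
	mu       sync.Mutex
	frozen   bool
	handlers map[string]func() string
}

func (r *frozenRegistry) register(name string, h func() string) error {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.frozen {
		return fmt.Errorf("register %q after freeze", name)
	}
	if r.handlers == nil {
		r.handlers = map[string]func() string{}
	}
	r.handlers[name] = h
	return nil
}

// freeze is what a pool.Run-style entry point would call before
// starting workers; after this, the map is read-only and lock-free reads
// from every worker are safe.
func (r *frozenRegistry) freeze() {
	r.mu.Lock()
	r.frozen = true
	r.mu.Unlock()
}

func main() {
	r := &frozenRegistry{}
	_ = r.register("click", func() string { return "ok" })
	r.freeze()
	err := r.register("late", func() string { return "nope" })
	fmt.Println(err)
}
```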
No new external dependencies. The README brags about exactly one
(gorilla/websocket). Implement errgroup-equivalent semantics inline
— see pkg/worker/pool.go for the template.
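Errgroup-equivalent semantics need only sync.WaitGroup plus sync.Once: run tasks, wait for all of them, return the first error. A minimal inline version (illustrative names, not copied from pkg/worker/pool.go):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// inlineGroup reproduces errgroup.Group's core contract with only the
// standard library.
type inlineGroup struct {
	wg   sync.WaitGroup
	once sync.Once
	err  error
}

func (g *inlineGroup) Go(f func() error) {
	g.wg.Add(1)
	go func() {
		defer g.wg.Done()
		if err := f(); err != nil {
			g.once.Do(func() { g.err = err }) // first error wins
		}
	}()
}

func (g *inlineGroup) Wait() error {
	g.wg.Wait() // happens-before the read of g.err
	return g.err
}

func main() {
	var g inlineGroup
	g.Go(func() error { return nil })
	g.Go(func() error { return errors.New("boom") })
	fmt.Println(g.Wait())
}
```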
No time.Sleep in production code. Zero calls today; every wait
uses select { case <-ctx.Done(): ... case <-time.After(...): ... }.
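The select-based wait is worth spelling out, since it is the sanctioned replacement for time.Sleep. waitOrCancel is a hypothetical helper name; the body is exactly the pattern the rule requires:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// waitOrCancel blocks for d, but returns early with the context's error
// if the context is cancelled first — unlike time.Sleep, which would
// block unconditionally.
func waitOrCancel(ctx context.Context, d time.Duration) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	case <-time.After(d):
		return nil
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel() // already cancelled: the wait must not block for an hour
	fmt.Println(waitOrCancel(ctx, time.Hour))
}
```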
Runtime — the checklist. If you add a new field to Runtime:
- go test -race ./pkg/runtime/... still passes.
- A test in pkg/worker/worker_test.go exercises
  the new state across ≥ 8 parallel adopted workers and asserts no
  bleed.

Every new go statement needs answers to:
- Does it receive a context.Context?
- Does it defer wg.Done() / defer close(ch) / defer cancel()?
- Does it defer recover() if it runs arbitrary user code?

Example — the parent-ctx watchdog in cdp.Conn:
go func() {
	select {
	case <-ctx.Done(): // parent cancelled → tear down
		_ = c.Close()
	case <-connCtx.Done(): // we closed normally → exit
	}
}()
var — the checklist: resetRuntimeRegistries() is the pattern for
test cleanup.

Quick smell test:
- defer c.Unsubscribe(ch) style → outdated; should be defer sub.Close().
- rt.vars.Something(...) from two goroutines → data race.
- for { ... time.Sleep(...) ... } → blocking; refactor to select on ctx.
- sync.Map → almost always the wrong choice here; prefer RWMutex + map
  for the few genuinely shared structures we have.

Derive a child logger for each worker using utils.WithPrefix. It shares the
parent's writer and level but prepends a [wN] tag to every line:
workerLog := utils.WithPrefix(parentLogger, fmt.Sprintf("[w%d] ", id))
All pkg/worker code routes through this prefix — do not construct a fresh
NewLogger per worker (that would split the output stream).
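The WithPrefix behaviour described above — shared writer and level, per-line tag — can be sketched with the standard log package. This is not the real utils.WithPrefix, only an illustration of why a prefix wrapper keeps one output stream:

```go
package main

import (
	"bytes"
	"fmt"
	"log"
)

// withPrefix sketches utils.WithPrefix: the child shares the parent's
// writer and flags but stamps every line with a worker tag.
func withPrefix(parent *log.Logger, tag string) *log.Logger {
	return log.New(parent.Writer(), parent.Prefix()+tag, parent.Flags())
}

func main() {
	var buf bytes.Buffer // one shared stream, as the rule requires
	parent := log.New(&buf, "", 0)
	for id := 1; id <= 2; id++ {
		workerLog := withPrefix(parent, fmt.Sprintf("[w%d] ", id))
		workerLog.Println("hunt started")
	}
	fmt.Print(buf.String())
}
```

Because both children write to the same underlying writer, the combined log stays interleaved in one place instead of splitting across files.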
- Conn, Subscription.
- Worker lifecycle.
- WorkerPool dispatch.
- PortAllocator.
- WithPrefix(parent, "[wN] ") for per-worker log prefixes; NewLogger(logFile)
  for dual stdout + ANSI-stripped file output.