Write and revise HelixDB Rust DSL stored queries from scratch. Use when the task is to add, update, or review a Helix query built with read_batch, write_batch, traversal builders, projections, indexes, BM25 text search, or vector search. Inspect local labels, edges, properties, and existing query patterns before inventing new code.
Write Helix Rust DSL queries in a way that is schema-aware, explicit, and easy for agents to reason about.
Use this skill when the task is to add, update, or review stored queries built with `read_batch()` and `write_batch()`.

Do not use this skill as the main guide for inline POST /v1/query payloads. Use the dynamic-query skill for that.
Before writing any query code, inspect local labels, edges, properties, and existing query patterns.
If the local repo is thin on Helix examples, use this repository's canonical references in this order:

1. docs/dsl-cheatsheet.md
2. examples/authoring-patterns.md
3. examples/search-patterns.md
4. docs/source-canon.md

Use:

- `read_batch()` for read-only routes
- `write_batch()` for any mutation

If the query adds nodes, adds edges, updates properties, or deletes graph data, it is a write route.
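The routing rule above can be captured as a small decision helper. This is an illustrative sketch only, not part of the Helix DSL:

```rust
// Illustrative sketch, not a Helix DSL builder: encodes the rule that
// any mutation makes the route a write route.
fn is_write_route(
    adds_nodes: bool,
    adds_edges: bool,
    updates_properties: bool,
    deletes_graph_data: bool,
) -> bool {
    adds_nodes || adds_edges || updates_properties || deletes_graph_data
}
```

If every flag is false, the route is read-only and belongs in `read_batch()`.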
Prefer indexed lookups as anchors before broader label scans. Do not start from a broad label scan when the application already has an indexed identifier like entityId, externalId, userId, tenantId, or a similar key.
Do not normalize names to your own preferred style.
If the application uses entityId, updatedAt, FOLLOWS, or RelatesTo, reuse those exact names.
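For example, reuse the application's exact edge name when traversing. This is a hedged sketch: the `out("FOLLOWS")` step is assumed to work like the other builders in this skill, so verify the exact signature in docs/dsl-cheatsheet.md:

```rust
// Sketch only: reuse the app's exact names (userId, FOLLOWS),
// never renamed variants like user_id or Follows.
g().n_with_label("User")
    .where_(Predicate::eq_param("userId", "userId"))
    .out("FOLLOWS")
```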
Apply scope and status filters before broad traversal whenever possible.
Common examples:

- tenantId or userId scope filters
- deletedAt status filters
- an explicit traversal direction: both, out, or in_

Use:

- project(...) for stable service-facing response shapes
- value_map(...) when returning all or many properties is acceptable
- edge_properties() for edge streams

Do not return oversized properties like embeddings unless the caller explicitly needs them.
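Combining the guidance above, here is a hedged sketch of a tenant-scoped read that filters before traversing and returns a stable projection. Builder names are taken from the examples in this skill; verify them against docs/dsl-cheatsheet.md:

```rust
// Sketch only: scope filter first, then a narrow projection.
read_batch()
    .var_as(
        "docs",
        g().n_with_label("Document")
            // tenant scope applied before any broad traversal
            .where_(Predicate::eq_param("tenantId", "tenantId"))
            .project(vec![
                PropertyProjection::new("$id"),
                PropertyProjection::new("title"),
                // the oversized embedding property is deliberately not projected
            ]),
    )
    .returning(["docs"])
```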
For BM25 and vector search, reuse the patterns in examples/search-patterns.md rather than inventing new shapes.
Apply dedup, limit, range, skip, count, and first because the route needs them, not by habit.
repeat(...) is often used with a deliberate bounded depth. Do not assume arbitrary runtime repeat depth unless the local code already supports it.
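A hedged sketch of bounded-depth repetition follows. The `repeat(...)` signature shown here (a closure plus a fixed depth) is hypothetical and repo-specific, so confirm the real shape in docs/dsl-cheatsheet.md before using it:

```rust
// Sketch only: hypothetical repeat signature with a deliberate bounded depth of 3.
g().n_with_label("User")
    .where_(Predicate::eq_param("userId", "userId"))
    .repeat(|t| t.out("FOLLOWS"), 3)
```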
When you need create-or-update behavior, follow the `var_as_if` pattern shown in the write example below.

An indexed read route:

```rust
read_batch()
    .var_as(
        "user",
        g().n_with_label("User")
            .where_(Predicate::eq_param("userId", "userId"))
            .project(vec![
                PropertyProjection::new("$id"),
                PropertyProjection::new("userId"),
                PropertyProjection::new("name"),
            ]),
    )
    .returning(["user"])
```
A create-or-update write route using `var_as_if`:

```rust
write_batch()
    .var_as(
        "existing",
        g().n_with_label("User")
            .where_(Predicate::eq_param("userId", "userId")),
    )
    .var_as_if(
        "updated",
        BatchCondition::VarNotEmpty("existing".to_string()),
        g().n(NodeRef::var("existing"))
            .set_property("name", PropertyInput::param("name")),
    )
    .var_as_if(
        "created",
        BatchCondition::VarEmpty("existing".to_string()),
        g().add_n(
            "User",
            vec![
                ("userId", PropertyInput::param("userId")),
                ("name", PropertyInput::param("name")),
            ],
        ),
    )
    .returning(["updated", "created"])
```
A vector search read route:

```rust
read_batch()
    .var_as(
        "results",
        g().vector_search_nodes_with(
            "Document",
            "embedding",
            PropertyInput::param("queryVector"),
            Expr::param("limit"),
            Some(PropertyInput::param("tenantId")),
        )
        .project(vec![
            PropertyProjection::new("$id"),
            PropertyProjection::new("title"),
            PropertyProjection::renamed("$distance", "distance"),
        ]),
    )
    .returning(["results"])
```
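This skill mentions BM25 text search but shows no example. Here is a hedged sketch modeled on vector_search_nodes_with; the builder name `search_bm25_nodes_with` is hypothetical, so check examples/search-patterns.md for the real one:

```rust
// Sketch only: search_bm25_nodes_with is a hypothetical name,
// modeled on the vector_search_nodes_with builder above.
read_batch()
    .var_as(
        "hits",
        g().search_bm25_nodes_with(
            "Document",
            "body",
            PropertyInput::param("queryText"),
            Expr::param("limit"),
        )
        .project(vec![
            PropertyProjection::new("$id"),
            PropertyProjection::new("title"),
            PropertyProjection::renamed("$score", "score"),
        ]),
    )
    .returning(["hits"])
```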
Do not:

- add dedup or limit without a reason

Before finishing, confirm:

- the choice of read_batch() versus write_batch() is correct

For shared references in this repo, see:

- docs/source-canon.md
- docs/dsl-cheatsheet.md
- examples/authoring-patterns.md
- examples/search-patterns.md