Tool modules reference for Symbiosis — make_tools() options, phase scopes, patterns.py high-level patterns, prompts.py helpers, pipeline YAML format, and all tool module exports. Use when building species handlers, working with LLM tool loops, or using any library/tools/ module.
All tool code lives in library/tools/. Tools are registered as OpenAI function schemas and dispatched through handle_tool().
make_tools(ctx, options: dict | None) -> list[dict] (library/tools/tools.py)
| Option key | Default | Adds tools |
|---|---|---|
| messaging | True | send_message |
| rooms | True | list_rooms |
| introspect | True | introspect |
| inter_instance | False | send_to_instance |
| publish | False | publish |
| graph | False | 7 graph tools (see below) |
| activation_map | False | 8 map tools (see below) |
Always included: read_file, write_file, list_files, done.
write_file auto-compacts content exceeding compact.threshold_chars (calls ctx.compact_file()). Tool result includes "auto-compacted (N → M chars)" if triggered.
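A minimal sketch of building a tool list, assuming a `ctx` object supplied by the species handler (not shown) and the option keys documented above:

```python
from library.tools.tools import make_tools

# Defaults already include messaging/rooms/introspect; opt in to extras.
tools = make_tools(ctx, {"graph": True, "inter_instance": True})

# `tools` is a list of OpenAI function schemas. Pass it to the model, then
# dispatch each tool call the model returns through handle_tool().
```

Passing `None` for options keeps all defaults, so `make_tools(ctx, None)` yields the baseline set plus the always-included file tools and `done`.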
library/tools/phases.py — use get_tools_for_phase(phase, *, graph, activation_map) to get the right tool set for each phase.
from library.tools.phases import THINKING, COMPOSING, REVIEWING
from library.tools.phases import get_tools_for_phase
tools = get_tools_for_phase(THINKING, graph=True, activation_map=True)
| Phase constant | Tool scope |
|---|---|
| THINKING | Read/list tools + append_thinking + archive_thoughts |
| COMPOSING | Read/list tools only |
| REVIEWING | Organize tools (read/write topics, archive) + read/list |
Graph and activation_map tools are available in all phases when enabled.
Organize tools (organize_*) are always included in REVIEWING; optionally in THINKING.
library/tools/patterns.py — import directly in species handlers.
gut = gut_response(ctx, events: list[Event], *, max_tokens=1024) -> dict
# Returns: {should_respond, urgency, brief, suggested_approach, rooms_to_respond}
plan = plan_response(ctx, events, gut_result, *, max_tokens=2048) -> dict
# Returns: {approach, key_points, tone, length, considerations}
response_text = compose_response(ctx, events, plan_result, *, max_tokens=4096) -> str
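The three functions chain together into the standard respond pattern. A sketch, assuming `ctx` and a list of incoming `events` come from the species handler:

```python
from library.tools.patterns import gut_response, plan_response, compose_response

gut = gut_response(ctx, events)
if gut["should_respond"]:
    plan = plan_response(ctx, events, gut)
    text = compose_response(ctx, events, plan)
    # Deliver `text` to the rooms in gut["rooms_to_respond"],
    # e.g. via the send_message tool.
```

Each stage feeds the previous stage's dict forward, so the plan sees the gut's urgency and suggested approach, and the composer sees the plan's tone and key points.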
thinking_session(
ctx, initial_message: str, system: str,
*, max_tokens=4096, max_turns=10,
extra_tools: list | None = None,
) -> None
# Runs tool loop with append_thinking / replace_thinking / done tools.
# Writes to thinking.md. Auto-compacts thinking.md on append_thinking if over threshold.
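A sketch of a free-form reflection pass; the prompt strings here are illustrative, not part of the library:

```python
from library.tools.patterns import thinking_session

thinking_session(
    ctx,
    initial_message="Review today's conversations and note open questions.",
    system="You are reflecting. Record thoughts with append_thinking, "
           "then call done when finished.",
    max_turns=6,
)
# Returns None; the output accumulates in thinking.md.
```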
run_organize_phase(
ctx, system: str,
*, extra_context="", graph=True, activation_map=True,
max_tokens=8192, label="species",
) -> None
# Runs knowledge organization tool loop. Uses REVIEWING phase tools.
# Builds knowledge structure summary and appends it to the initial message.
run_create_phase(
ctx, system: str,
*, extra_context="", graph=True, activation_map=True,
max_tokens=8192, label="species",
) -> None
# Creative output phase. Uses creative tools + read/list.
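The two phases are typically run back to back: organize first, then create from the organized structure. A sketch with illustrative system prompts:

```python
from library.tools.patterns import run_organize_phase, run_create_phase

run_organize_phase(
    ctx,
    system="Organize what you learned today into topics.",
    label="mybot",  # label shows up in logs; value is illustrative
)
run_create_phase(
    ctx,
    system="Write one short piece drawing on your organized notes.",
    label="mybot",
)
```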
digest = distill_memory(ctx, *, exclude=None, include=None) -> str
# Recursive memory compression via LLM. Returns a digest string.
summary = distill_messages(ctx, messages: list[Event]) -> str
# Compress a list of events into a summary string.
run_subconscious(ctx, *, context="") -> str
# Generates and writes subconscious.md. Returns content.
run_react(ctx, *, context="") -> None
# Generates and writes intentions.md from current context.
update_relationships(ctx, sender: str, events: list[Event]) -> None
# Writes/updates relationships/<sender>.md.
text = llm_generate(ctx, system: str, message: str, *, max_tokens=2048, caller="?") -> str
# Simple one-shot LLM call; returns the model's response text.
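A sketch of a one-shot call; the prompt strings are illustrative, and the exact use of `caller` (log attribution) is an assumption:

```python
from library.tools.patterns import llm_generate

text = llm_generate(
    ctx,
    system="Summarize the following in one sentence.",
    message="...long input text...",
    max_tokens=256,
    caller="my_handler",  # attribution label (assumption)
)
```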
library/tools/prompts.py
memory = read_memory(ctx, paths: list[str] | None = None) -> str
# Reads and concatenates memory files into a formatted block.
# Default paths: thinking.md, intentions.md, subconscious.md, project.md
entity_id = get_entity_id(ctx) -> str
# Returns instance's messaging entity_id (or empty string).
formatted = format_events(events: list[Event], *, max_events=20) -> str
# Formats event list as readable message history.
block = format_relationships_block(ctx) -> str
# Reads relationships/ files and formats as context block.
block = format_memory_context(ctx, *, extra_files: list[str] | None = None) -> str
# Combines memory + relationships into a full context block.
block = format_intentions_block(ctx) -> str
block = format_subconscious_block(ctx) -> str
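These helpers compose naturally when assembling a system prompt. A sketch, assuming `ctx` and `events` come from the handler:

```python
from library.tools.prompts import format_memory_context, format_events

system = (
    "You are responding on behalf of this instance.\n\n"
    + format_memory_context(ctx)  # memory files + relationships block
)
history = format_events(events, max_events=10)
```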
library/tools/pipeline.py — run_pipeline(ctx, pipeline_config, payload)
Pipelines are YAML files loaded by species at startup. Each step maps to a patterns.py function or built-in stage.
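The step schema below is purely illustrative; check pipeline.py for the real keys. Assuming each step names a patterns.py function:

```yaml
# Hypothetical pipeline file — step names assume a 1:1 mapping to
# patterns.py functions; the actual schema is defined by pipeline.py.
steps:
  - gut_response
  - plan_response
  - compose_response
```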