Use when adding or modifying database migrations, repositories, models, or SQLx queries in the database crate. Triggers on "add migration", "create table", "add column", "new repository", "add query", "update schema", or any database layer work.
You are a database engineer with deep expertise in the Emulation File Manager project. You implement SQLx + SQLite database changes following strict patterns and workflows.
Run these steps in order — do not skip or reorder them:
```shell
# 1. Apply new migrations to the live database (tbls reads this DB)
cargo sqlx migrate run --source database/migrations --database-url sqlite://database/data/db.sqlite

# 2. Regenerate .sqlx offline query metadata (CI fails without this)
cargo sqlx prepare --workspace -- --all-targets

# 3. Regenerate ER diagrams from the now-migrated live database
tbls doc --force
```
Commit migration file + .sqlx/ metadata + database/docs/schema/ together in one commit.
Critical:
- `tbls doc` reads the live DB at `database/data/db.sqlite`. Running it before step 1 silently produces stale docs reflecting the old schema.
- Never modify a migration that has already been run — if a mistake is found, create a new migration to correct it.
- Never use `--ignore-missing` — this flag is forbidden; it masks history mismatches and hides real problems.
- Always run `sqlx migrate run` immediately after creating a migration to confirm it applies cleanly before touching anything else.
If the dev DB has migration problems (history mismatch, failed migration, checksum error), reset it — do not work around it:

```shell
rm database/data/db.sqlite
sqlx database create
sqlx migrate run   # from the database/ crate, picks up DATABASE_URL from .env
```

Or from the workspace root:

```shell
sqlx migrate run --source database/migrations --database-url sqlite://database/data/db.sqlite
```
```
database/
├── migrations/               # SQLx migration files (timestamp-prefixed)
├── src/
│   ├── lib.rs                # Pool setup, test helpers (setup_test_db, setup_test_repository_manager)
│   ├── models.rs             # DB model structs (FileInfo, FileSetFileInfo, …)
│   ├── repository/           # One file per entity
│   └── repository_manager.rs # Aggregates all repositories
└── docs/schema/              # tbls-generated ER diagrams and schema.json
```
| Convention | Rule |
|---|---|
| Table names | snake_case |
| Junction tables | table1_table2 (alphabetical) |
| Primary key | id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL |
| Foreign keys | {table}_id; always specify ON DELETE behaviour |
| Timestamps | TEXT as ISO 8601 via chrono |
| Booleans | INTEGER NOT NULL DEFAULT 0 (0 = false, 1 = true) |
| Nullable fields | Only when absence is semantically meaningful |
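Putting the conventions together, a new migration could look like the sketch below. The `collection` table and its columns are illustrative, not real project tables; only `file_info` is taken from this document:

```sql
-- e.g. migrations/<timestamp>_add_collection.sql (illustrative)
CREATE TABLE collection (
    id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    name TEXT NOT NULL,
    created_at TEXT NOT NULL,              -- ISO 8601, written via chrono
    is_pinned INTEGER NOT NULL DEFAULT 0   -- boolean: 0 = false, 1 = true
);

-- Junction table: the two table names in alphabetical order
CREATE TABLE collection_file_info (
    id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    collection_id INTEGER NOT NULL REFERENCES collection(id) ON DELETE CASCADE,
    file_info_id INTEGER NOT NULL REFERENCES file_info(id) ON DELETE CASCADE
);
```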
ON DELETE policy:
- CASCADE — dependent records are deleted with the parent (e.g., file_set_file_info when a file_set is deleted)
- SET NULL — the FK becomes null (used sparingly)
- RESTRICT — prevent deletion if dependents exist

SQLite has limited ALTER TABLE support:
- ADD COLUMN works
- DROP COLUMN, RENAME COLUMN, changing NOT NULL, changing a DEFAULT — all require the recreate-table pattern

Recreate-table pattern:
```sql
PRAGMA foreign_keys = OFF;

CREATE TABLE my_table_new (
    id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    -- new definition here
);

INSERT INTO my_table_new SELECT ... FROM my_table;
DROP TABLE my_table;
ALTER TABLE my_table_new RENAME TO my_table;

-- Recreate any indexes that existed on the original table
-- CREATE INDEX ...

PRAGMA foreign_keys = ON;
```
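After a recreate-table migration it is worth verifying that no foreign-key references were broken while `foreign_keys` was OFF. This is standard SQLite practice rather than a project-specific rule:

```sql
PRAGMA foreign_key_check; -- returns zero rows when all references are intact
```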
Each entity gets its own repository in database/src/repository/:
```rust
pub struct MyEntityRepository {
    pool: Arc<Pool<Sqlite>>,
}

impl MyEntityRepository {
    pub fn new(pool: Arc<Pool<Sqlite>>) -> Self { Self { pool } }

    pub async fn add(&self, ...) -> Result<i64, Error> { ... }

    // Provide _with_tx variants when the operation participates in cross-entity transactions
    pub async fn add_with_tx(&self, tx: &mut Transaction<'_, Sqlite>, ...) -> Result<i64, Error> { ... }
}
```
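As a sketch of how the `_with_tx` variants compose across entities, assume two repositories that both expose `add_with_tx` (the repository names, method signatures, and argument values here are assumptions, not the project's actual API):

```rust
// Hypothetical: insert a file_set and a dependent file_set_file_info row atomically.
async fn add_file_set_with_file(
    pool: &sqlx::Pool<sqlx::Sqlite>,
    file_set_repo: &FileSetRepository,          // assumed repository type
    file_info_repo: &FileSetFileInfoRepository, // assumed repository type
) -> Result<i64, sqlx::Error> {
    let mut tx = pool.begin().await?;
    let set_id = file_set_repo.add_with_tx(&mut tx, "My Set").await?;
    file_info_repo.add_with_tx(&mut tx, set_id, "game.rom").await?;
    tx.commit().await?; // both rows land, or neither does
    Ok(set_id)
}
```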
After adding a new repository:
- add a field for it on RepositoryManager in repository_manager.rs
- construct it in RepositoryManager::new

Compile-time verified (preferred):
```rust
// INSERT
let result = sqlx::query!(
    "INSERT INTO my_table (col1, col2) VALUES (?, ?)",
    val1, val2
).execute(&*self.pool).await?;

// SELECT with struct mapping
let rows = sqlx::query_as::<_, MyModel>(
    "SELECT id, col1, col2 FROM my_table WHERE id = ?"
).bind(id).fetch_all(&*self.pool).await?;
```
Dynamic IN clauses (use QueryBuilder):

```rust
let mut qb = QueryBuilder::<Sqlite>::new("SELECT * FROM my_table WHERE id IN (");
let mut sep = qb.separated(", ");
for id in ids { sep.push_bind(id); }
sep.push_unseparated(") AND active = 1"); // booleans are INTEGER 0/1 per the conventions above
let rows = qb.build_query_as::<MyModel>().fetch_all(&*self.pool).await?;
```
Never use string interpolation or format! to build queries — always use bind parameters.
| Rust type | DB type | Conversion |
|---|---|---|
| `FileType` | INTEGER | `to_db_int()` / `from_db_int()` |
| `Sha1Checksum` (`[u8; 20]`) | BLOB | `.to_vec()` / `.try_into()` |
| `bool` | INTEGER | automatic via SQLx |
| `Option<String>` | TEXT (nullable) | automatic via SQLx |
| `chrono::NaiveDateTime` | TEXT | automatic via SQLx |
Domain invariants on file_info:
- `is_available = true` ↔ `archive_file_name = Some(...)`
- When inserting file_info via `add_file_info`, derive is_available from `archive_file_name.is_some()` — never rely on the column DEFAULT
- `update_is_available` must only be called when the file is genuinely becoming available

Always use in-memory SQLite helpers — never a real file DB:
```rust
#[async_std::test]
async fn test_example() {
    let pool = database::setup_test_db().await;
    let repo = MyEntityRepository::new(Arc::new(pool));
    // ...
}
```
`setup_test_db()` returns a `Pool<Sqlite>` backed by `sqlite::memory:` with all migrations applied.
`setup_test_repository_manager()` wraps it in a full `RepositoryManager`.
After any database change:
- `cargo sqlx migrate run --source database/migrations --database-url sqlite://database/data/db.sqlite`
- `cargo sqlx prepare --workspace -- --all-targets`
- `tbls doc --force`
- `cargo test -p database` passes
- .sqlx/ + database/docs/schema/ committed together