Overview
InsForge database migrations are versioned SQL files stored in your repository and applied with `@insforge/cli`.
For most teams, the workflow is:
- Create a migration file locally.
- Write the SQL that changes your schema.
- Commit that file to git.
- Apply it to the linked InsForge backend.
Applied migrations are recorded in `system.custom_migrations`. This is separate from `system.migrations`, which InsForge uses for its own internal platform upgrades.
How InsForge migrations work
The core workflow is simple:
- Create a file in `migrations/` with `npx @insforge/cli db migrations new ...`.
- Write the SQL for the schema change you want.
- Commit that file to git.
- Apply it with `npx @insforge/cli db migrations up ...`.
Before you start
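The exact setup depends on your project, but assuming the npm package name `@insforge/cli` used throughout these docs, getting the CLI available looks roughly like this sketch:

```shell
# Run the CLI on demand via npx (no global install needed)
npx @insforge/cli --help

# Or install it globally; the package name is taken from these docs
npm install -g @insforge/cli
```

How you link the CLI to a specific backend depends on your InsForge setup; check the CLI's help output for the linking command.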
Install and link the CLI to the backend you want to manage.

Avoid schema drift
If you choose to manage your database with migrations, all schema changes should go through migration files. Avoid changing schema through:
- the dashboard schema tools
- ad hoc raw SQL commands
Instead:
- Create a migration file.
- Put the schema change in that file.
- Commit it to git.
- Apply it with the CLI.
Create and apply your first migration
This walkthrough uses a simple `employees` table to show the InsForge CLI workflow.
1. Create a migration file
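Step 1 might look like the sketch below. The migration name `create-employees` is illustrative, and the real CLI call is shown as a comment because it needs a linked backend; the `date` lines mimic the CLI's documented naming scheme:

```shell
# Hypothetical invocation (requires a linked backend):
# npx @insforge/cli db migrations new create-employees

# The CLI names files <14-digit-UTC-timestamp>_<name>.sql:
version="$(date -u +%Y%m%d%H%M%S)"
mkdir -p migrations
touch "migrations/${version}_create-employees.sql"
echo "created migrations/${version}_create-employees.sql"
```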
The new file lands in `migrations/`, named with a `<timestamp>` in `YYYYMMDDHHmmss` format.
2. Add the SQL
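For the `employees` walkthrough, the migration SQL might look like the sketch below. The columns are illustrative, and the timestamped filename is generated inline so the snippet is self-contained:

```shell
mkdir -p migrations
version="$(date -u +%Y%m%d%H%M%S)"
cat > "migrations/${version}_create-employees.sql" <<'SQL'
-- Illustrative schema for the walkthrough
create table public.employees (
  id bigint generated always as identity primary key,
  name text not null,
  created_at timestamptz not null default now()
);
SQL
```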
Add the SQL for your table to the new migration file.

3. Apply the migration
Apply every pending local migration in order with `up --all`. Passing a single version or filename instead of `--all` applies just that file.
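Step 3 in both forms (the single-file example name is hypothetical):

```shell
# Apply every pending local migration, oldest first
npx @insforge/cli db migrations up --all

# Or apply exactly one migration file
npx @insforge/cli db migrations up 20240101120000_create-employees.sql
```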
4. Confirm the remote history
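The docs note that `GET /api/database/migrations` returns the applied history, so one way to confirm the remote state is to query it directly. This is a sketch; the base URL and auth header are placeholders for whatever your backend actually requires:

```shell
curl -H "Authorization: Bearer $INSFORGE_TOKEN" \
  "$INSFORGE_BASE_URL/api/database/migrations"
```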
Modify the schema with another migration
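Continuing the walkthrough, a second migration that alters `employees` might look like this sketch (the column and filename are illustrative; the apply command is commented because it needs a linked backend):

```shell
mkdir -p migrations
version="$(date -u +%Y%m%d%H%M%S)"
cat > "migrations/${version}_add-employee-email.sql" <<'SQL'
-- Illustrative change to the walkthrough table
alter table public.employees
  add column email text;
SQL

# Then apply it:
# npx @insforge/cli db migrations up --all
```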
Create a second migration file and put the schema change in it.

Applying one migration vs. applying many
InsForge gives you three apply modes:

| Command | What it does |
|---|---|
| `npx @insforge/cli db migrations up <version-or-filename>` | Apply exactly one migration file |
| `npx @insforge/cli db migrations up --to <version-or-filename>` | Apply pending files up to and including a target |
| `npx @insforge/cli db migrations up --all` | Apply every pending local migration |
`up <target>` is the most forgiving mode. It validates the target migration and still enforces ordering, but it does not require every unrelated file in `migrations/` to be valid. `up --to` and `up --all` validate the whole directory first.

Sync remote history into local files
If the backend already has applied migrations, you can materialize them into your repository. The `fetch` command:
- ensures `migrations/` exists
- reads `GET /api/database/migrations`
- writes one local `.sql` file per applied remote migration
- skips existing exact filenames instead of overwriting them
Use `fetch` when:
- you are onboarding to an existing InsForge project
- your local `migrations/` folder is missing files
- you want a disk copy of the backend’s applied history
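Assuming `fetch` follows the same `db migrations` subcommand pattern as `new` and `up`, syncing remote history down looks like:

```shell
npx @insforge/cli db migrations fetch
```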
Team workflow
Once you adopt migrations, treat them as code:
- Create migration files on your feature branch.
- Commit `migrations/*.sql` to git.
- Review migration SQL just like application code.
- Apply migrations to staging or production from those committed files.
- If a migration has already been applied remotely, do not edit it in place. Create a new forward migration instead.
Git tracks the migration files in your repository. InsForge tracks what has been applied in `system.custom_migrations`. Keep both in sync by committing every migration file and applying schema changes from migration files instead of making ad hoc remote edits.

Because `db migrations up` executes immediately against the linked backend, shared environments need coordination. In practice, that usually means one person or one CI job applies migrations to production-like environments.
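One common shape for that coordination is a single CI step that applies the committed files. This is a sketch; how the job authenticates against the linked backend depends entirely on your setup:

```shell
#!/usr/bin/env sh
set -eu
# CI-only apply step: developers merge migration files,
# and this job is the one place that runs them against production.
npx @insforge/cli db migrations up --all
```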
Rules and guardrails
The CLI intentionally keeps migration naming and ordering strict:
- Files live in `migrations/`.
- Filenames must match `<14-digit-version>_<migration-name>.sql`.
- Migration names may use lowercase letters, numbers, and hyphens only.
- `db migrations new` refuses to continue if your local filenames are invalid or two files share the same version.
- Empty migration files are rejected.
- Today there is no `down` or rollback command in the CLI. Support for down SQL statements is planned for a future iteration.
- The API accepts numeric version strings up to 64 digits.
- The CLI standardizes on 14-digit UTC timestamps for local filenames.
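The naming rules can be checked locally before the CLI ever sees the files; the regex below mirrors the documented `<14-digit-version>_<migration-name>.sql` format:

```shell
pattern='^[0-9]{14}_[a-z0-9-]+\.sql$'
for f in 20240101120000_create-employees.sql BadName.sql; do
  if printf '%s\n' "$f" | grep -Eq "$pattern"; then
    echo "ok: $f"
  else
    echo "invalid: $f"
  fi
done
# prints:
#   ok: 20240101120000_create-employees.sql
#   invalid: BadName.sql
```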
Under the hood
When you run `db migrations up`, the CLI talks to the backend migration API:
- `GET /api/database/migrations` returns the applied history
- `POST /api/database/migrations` executes a new migration

The POST endpoint:
- Splits your SQL into statements for history tracking.
- Rejects migration files that try to manage their own transactions with `BEGIN`, `COMMIT`, or `ROLLBACK`.
- Starts a transaction and acquires an advisory lock on `system.custom_migrations`.
- Sets `search_path` to `public` for the transaction.
- Executes the SQL.
- Inserts a row into `system.custom_migrations` only if the SQL succeeds.
- Notifies PostgREST to reload schema metadata.
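Because the backend rejects files that contain their own transaction control, a quick local pre-check can catch the problem before pushing. This sketch creates a deliberately bad example file and then scans for the offending keywords:

```shell
mkdir -p migrations
# A file like this would be rejected by the backend:
cat > migrations/20240101120000_bad-example.sql <<'SQL'
BEGIN;
create table public.widgets (id bigint primary key);
COMMIT;
SQL

# Flag any migration that manages its own transaction
for f in migrations/*.sql; do
  if grep -Eiq '^[[:space:]]*(begin|commit|rollback)[[:space:];]' "$f"; then
    echo "transaction control found in $f"
  fi
done
```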
Because the backend sets `search_path` to `public`, unqualified objects such as `create table posts (...)` resolve to `public.posts` by default. Using `public.` explicitly in migration SQL is still a good habit when you want the target schema to be obvious in code review.

Troubleshooting
“Migration version must be newer than the latest applied migration”
Your local file is older than the remote head, or you reused a version. Run `db migrations new ...` and move your SQL into the new versioned file.
“Migration is already applied remotely”
Do not try to re-run it. Either:
- keep the local file because it matches committed history, or
- delete the stale duplicate if it should not exist locally
“Migration … is not the next pending local migration”
An earlier local migration must be applied first. Either:
- apply the earlier file, or
- fix/delete the stale file if it should no longer exist