Governance and orchestration for recomputable warehouse datasets.
You build models that produce datasets — and those datasets depend on each other. When external sources update, you need to recompute downstream models in the right order, knowing exactly which input versions went into each output. As the number of models grows, keeping track of dependencies, provenance, and data quality becomes harder than the modeling itself.
DBPort is the orchestration layer on top of your warehouse that builds governance into recomputable workflows. It tracks dependencies between your models and on external inputs, so you can build with confidence that future updates will be picked up correctly — and that other models can pick up your results.
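Recomputing "in the right order" is a topological ordering of the dependency graph. A minimal sketch of the idea with Python's standard-library `graphlib` — the graph and the downstream model name here are hypothetical, not part of DBPort:

```python
from graphlib import TopologicalSorter

# Hypothetical graph: each dataset maps to the datasets it reads from.
deps = {
    "emp__regional_trends": {"nama_10r_3empers"},      # reads an external input
    "emp__national_rollup": {"emp__regional_trends"},  # hypothetical downstream model
}

# static_order() yields every dataset after all of its inputs —
# exactly the order a recompute must follow.
order = list(TopologicalSorter(deps).static_order())
```

Running this, the external input comes first and each model only appears after everything it depends on.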
```shell
pip install dbport
```

```shell
# Initialize a project
dbp init regional_trends --agency wifor --dataset emp__regional_trends
cd regional_trends

# Configure schema, inputs, and columns
dbp config model wifor.emp__regional_trends schema sql/create_output.sql
dbp config model wifor.emp__regional_trends input estat.nama_10r_3empers

# Run the full lifecycle: load inputs → execute model → publish output
dbp model run --version 2026-03-09 --timing
```

For programmatic control, the same workflow in Python:
```python
from dbport import DBPort

with DBPort(agency="wifor", dataset_id="emp__regional_trends") as port:
    port.schema("sql/create_output.sql")
    port.load("estat.nama_10r_3empers", filters={"wstatus": "EMP"})
    port.execute("sql/transform.sql")
    port.publish(version="2026-03-09", params={"wstatus": "EMP"})
```

- Dependency tracking — models produce datasets that feed other models. DBPort tracks these dependencies so you always know what depends on what across your organisation.
- Input provenance — every publish records exactly which input versions and snapshots were used. Trace any output back to the data that produced it.
- Recompute on change — snapshot-cached inputs detect when external sources update. Unchanged tables are skipped — only what's new gets reprocessed.
- Schema drift detection — declare the output shape upfront. Drift is caught before anything is written to the warehouse, not after.
- Versioned, resumable publishes — each publish records version, parameters, and row count. Interrupted runs resume from checkpoint. Re-running a completed version is a safe no-op.
- Committable state — `dbport.lock` is TOML, credential-free, and safe to commit. It tracks schema, inputs, and version history for code review and CI.
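As a rough illustration of what a committable lock file can capture, here is a hypothetical fragment — the keys and layout are assumptions for illustration, not DBPort's documented format:

```toml
# Illustrative sketch only — not DBPort's real lock-file schema.
[model]
agency = "wifor"
dataset = "emp__regional_trends"
schema = "sql/create_output.sql"

[inputs."estat.nama_10r_3empers"]
filters = { wstatus = "EMP" }

[[versions]]
version = "2026-03-09"
params = { wstatus = "EMP" }
```

Because it contains no credentials, a file like this can go through code review and CI alongside the model's SQL.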
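Snapshot-based change detection can be pictured as comparing content fingerprints of an input table. A generic sketch of the technique, not DBPort's actual mechanism:

```python
import hashlib

def fingerprint(rows):
    """Hash a table's rows so an unchanged source yields the same digest."""
    h = hashlib.sha256()
    for row in rows:
        h.update(repr(row).encode())
    return h.hexdigest()

# Hypothetical rows from an external input table.
cached = fingerprint([("AT11", 2024, 412.5)])
fresh  = fingerprint([("AT11", 2024, 412.5)])

# Equal digests mean the input is unchanged, so reprocessing can be skipped.
print(cached == fresh)  # True
```

Any change in the source — a single value, a new row — produces a different digest and marks the input for reload.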
DBPort reads credentials from environment variables:
```shell
export ICEBERG_REST_URI=https://catalog.example.com
export ICEBERG_CATALOG_TOKEN=your-token
export ICEBERG_WAREHOUSE=your-warehouse
```

See the credentials guide for all options.
Full docs at knifflig.github.io/dbport
- About DBPort — why it exists and who it's for
- Getting Started — installation, credentials, first run
- Concepts — inputs, outputs, metadata, lock file, hooks, versioning
- CLI Reference — `dbp` command reference
- Python API — `DBPort` class reference
- Examples — complete CLI and Python workflows
See CONTRIBUTING.md for development setup and guidelines.
Apache License 2.0 — see LICENSE.