This plugin ports dbt functionality to Timeplus Proton.
Use your favorite Python package manager to install the adapter from PyPI, e.g.:

```shell
pip install dbt-timeplus
```

Follow the dbt documentation to install dbt with pip.
```shell
python3.10 -m venv proton-dbt-env
source proton-dbt-env/bin/activate
```
```shell
# Installs matching versions for local dev/tests
pip install -r dev_requirements.txt
```

Then run `pip install -e .` to install the current dev code.
Testing:
- `pytest tests/unit` runs fast unit tests (no DB needed).
- Functional tests require a running Timeplus endpoint. Export env vars or use `tests/test.env`:
  - `DBT_TEST_HOST` (default `localhost`)
  - `DBT_TEST_PORT` (default `8463`)
  - `DBT_TEST_USER` (default `default`)
  - `DBT_TEST_PASSWORD` (default empty)
  - `DBT_TEST_SCHEMA` (default `default`)
- Run `pytest tests/functional` for functional tests.
- Run `pytest tests/integration/timeplus.dbtspec` for integration tests.
- If you run Kafka/ClickHouse locally (e.g., via Docker Compose), set the relevant environment variables for your setup, then run `pytest -k external` to execute only those tests. See `tests/test.env.sample` for the full variable list.
Note: host:port values depend on your environment and are not enforced by the adapter. Use the ports your services expose.
Typical defaults:
- ClickHouse native: `9000` (plain), `9440` (TLS); HTTP: `8123`/`8443`.
- Kafka brokers: `9092` (plain), others depending on your deployment.
Tip: copy `tests/test.env.sample` to `tests/test.env` and edit for local runs (pytest-dotenv loads it automatically).
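For reference, a local `tests/test.env` built from the variables above might look like the following (the values shown are just the documented defaults; adjust them to your environment):

```shell
# tests/test.env — loaded automatically by pytest-dotenv
DBT_TEST_HOST=localhost
DBT_TEST_PORT=8463
DBT_TEST_USER=default
DBT_TEST_PASSWORD=
DBT_TEST_SCHEMA=default
```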
- Python: 3.10, 3.11, 3.12
- dbt-core: 1.10.x (pinned to `1.10.13`)
- proton-driver: `>=0.2.13`
- Table materialization
- View materialization
- Incremental materialization
- Seeds
- Sources
- Docs generate
- Tests
- Snapshots (experimental)
- Ephemeral materialization
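As a sketch of the incremental materialization, a hypothetical model might look like the following. The model name, columns, and the `_tp_time`-based filter are illustrative assumptions, not prescribed by the adapter; `is_incremental()` and `{{ this }}` are standard dbt macros:

```sql
-- models/car_events_incremental.sql (hypothetical)
{{ config(materialized='incremental') }}

select cid, speed_kmh, _tp_time
from table(car_live_data)  -- table(...) yields a bounded snapshot

{% if is_incremental() %}
  -- on incremental runs, only take rows newer than what the target already holds
  where _tp_time > (select max(_tp_time) from {{ this }})
{% endif %}
```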
The dbt relation model `database.schema.table` is not compatible with Timeplus, because Timeplus has no separate schema layer. The adapter therefore uses a two-part model, `schema.table`, where the dbt schema maps to a Timeplus database.
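Concretely, with a dbt schema of `analytics`, a `{{ ref(...) }}` would compile to a two-part name (model and schema names here are illustrative):

```sql
-- {{ ref('daily_counts') }} with schema 'analytics' compiles to:
select * from analytics.daily_counts
```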
Timeplus streams are streaming by default. To avoid long-running queries in dbt models and tests, wrap streaming sources with `table(...)` when selecting, for example:

```sql
select window_end, cid, count() as cnt
from tumble(table(car_live_data), 1s)
group by window_end, cid
```
This produces a bounded snapshot for deterministic builds.
| Option | Description | Default |
|---|---|---|
| engine | Stream engine used when creating streams | Stream(1, 1, rand()) |
| order_by | Column(s) or expression(s) used for ordering | to_start_of_hour(_tp_time) |
| partition_by | Partition expression for stream | to_YYYYMMDD(_tp_time) |
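These options can be set per model via `config(...)`. A sketch using the defaults from the table above (the model body and column names are illustrative assumptions):

```sql
{{ config(
    materialized='table',
    engine='Stream(1, 1, rand())',
    order_by='to_start_of_hour(_tp_time)',
    partition_by='to_YYYYMMDD(_tp_time)'
) }}

select cid, count() as cnt
from table(car_live_data)
group by cid
```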
```yaml
your_profile_name:
  target: dev
  outputs:
    dev:
      type: timeplus
      schema: [database name] # default default
      host: [db.url.timeplus] # default localhost
      # optional
      port: [port] # default 8463
      user: [user]
      password: [abc123]
      verify: [verify] # default False
      secure: [secure] # default False
      connect_timeout: [10] # default 10
      send_receive_timeout: [300] # default 300
      sync_request_timeout: [5] # default 5
      compress_block_size: [1048576] # default 1048576
      compression: ['lz4'] # default '' (disable)
```
Create a materialized view that writes into a target stream and optionally applies settings:
```sql
{{ config(materialized='materialized_view', into='mv_target', settings='checkpoint_interval=5') }}

select window_start as win_start, s, sum(i) as total
from tumble(table(rd), 2s)
group by window_start, s
```
The adapter includes tests/examples for Kafka and ClickHouse sinks. Export the environment variables shown above and run `pytest -k external` to execute only those tests.
- Package name: `dbt-timeplus` (renamed from `dbt-proton`).
- Adapter version mirrors the dbt minor version (`1.10.*`).
- The bundled macro package version in `dbt/include/timeplus/dbt_project.yml` is kept in sync as a convention.