honker is a SQLite extension + language bindings that add Postgres-style NOTIFY/LISTEN semantics to SQLite, with built-in durable pub/sub, task queue, and event streams, without client polling or a daemon/broker. Any language that can SELECT load_extension('honker') gets the same features.
honker works by replacing application-level query polling with a single-digit-µs `PRAGMA data_version` read every 1 ms, which yields push-like semantics and cross-process notifications with single-digit-millisecond delivery.
Some bindings expose experimental watcher backends, such as mmap of
<db>-shm in WAL mode or kernel file events. AUTO backend modes stay
conservative and use PRAGMA data_version. The stable semantics stay
the same: wake on committed updates, ignore rolled-back work, and
re-read SQLite state after every wake.
Alpha software. Better than experimental but not beta-quality yet.
SQLite is increasingly the database for shipped projects. Those inevitably require pubsub and a task queue. The usual answer is "add Redis + Celery." That works, but it introduces a second datastore with its own backup story, a dual-write problem between your business table and the queue, and the operational overhead of running a broker.
honker takes the approach that if SQLite is the primary datastore, the queue should live in the same file. That means INSERT INTO orders and queue.enqueue(...) commit in the same transaction. Rollback drops both. The queue is just rows in a table with a partial index.
Prior art: pg_notify (fast triggers, no retry/visibility), Huey (SQLite-backed Python), pg-boss and Oban (the Postgres-side gold standards we're chasing on SQLite). If you already run Postgres, use those, as they are excellent.
honker ships as a Rust crate (honker, plus honker-core/honker-extension), a SQLite loadable extension, and language packages: Python (honker), Node (@russellthehippo/honker-node), Bun (@russellthehippo/honker-bun), Ruby (honker), Go, Elixir, C++, .NET / C#, and JVM / Kotlin packages. The on-disk layout is defined once in Rust; every binding is a thin wrapper around the loadable extension.
See Binding support for the current truth table: which bindings have typed queue/stream/listen/scheduler APIs, which ones have packaged-install proof, and what CI actually proves.
```python
import honker

db = honker.open("app.db")
emails = db.queue("emails")

# Enqueue
emails.enqueue({"to": "alice@example.com"})

# Consume (worker process)
async for job in emails.claim("worker-1"):
    send(job.payload)
    job.ack()
```

Any enqueue can be atomic with a business write. Rollback drops both.

```python
with db.transaction() as tx:
    tx.execute("INSERT INTO orders (user_id) VALUES (?)", [42])
    emails.enqueue({"to": "alice@example.com"}, tx=tx)
```

Today:
- Notify/listen across processes on one `.db` file
- Work queues with retries, priority, delayed jobs, and a dead-letter table
- Any send can be atomic with your business write (commit together or roll back together)
- Single-digit millisecond cross-process reaction time, no polling
- Handler timeouts, declarative retries with exponential backoff
- Delayed jobs, task expiration, named locks, rate-limiting
- Time-trigger scheduling with a leader-elected scheduler: 5-field cron, 6-field cron, and `@every <n><unit>`. Schedules are addressable rows you can `pause`, `resume`, `update`, `list`, and `unschedule` from any process or binding.
- Cancel a pending or in-flight job by id (`queue.cancel(id)`); read any job's row via `queue.get_job(id)`
- Opt-in task result storage (`enqueue` returns an id, worker persists the return value, caller awaits `queue.wait_result(id)`)
- Durable streams with per-consumer offsets and configurable flush interval
- SQLite loadable extension so any SQLite client can read the same tables
- Bindings: Python, Node.js, Rust, Go, Ruby, Bun, Elixir, .NET / C#, Java/JVM, and Kotlin
- Works inside an ORM-owned SQLite connection: SQLAlchemy, SQLModel, Django, Drizzle, Kysely, sqlx, GORM, ActiveRecord, Ecto, Hibernate, jOOQ, MyBatis, Exposed (guide)
Deliberately not built: task pipelines/chains/groups/chords, multi-writer replication, workflow orchestration with DAGs.
```shell
pip install honker
```

```python
import honker

db = honker.open("app.db")
emails = db.queue("emails")

with db.transaction() as tx:
    tx.execute("INSERT INTO orders (user_id) VALUES (?)", [42])
    emails.enqueue({"to": "alice@example.com"}, tx=tx)  # atomic with order

# Then in a worker, do:
async for job in emails.claim("worker-1"):  # wakes on updates or due deadlines
    try:
        send(job.payload)
        job.ack()
    except Exception as e:
        job.retry(delay_s=60, error=str(e))
```

`claim()` is an async iterator; each iteration is one `claim_batch(worker_id, 1)`. It wakes on any database update, or when the next claim-relevant deadline arrives (`run_at` for delayed jobs, `claim_expires_at` for reclaims), and falls back to a 5 s paranoia poll only if the update watcher can't fire. For batched work, call `claim_batch(worker_id, n)` explicitly and ack with `queue.ack_batch(ids, worker_id)`. Defaults: visibility 300 s.
If you want a function call to turn into an enqueued job without wrapping queue.enqueue by hand:
```python
@emails.task(retries=3, timeout_s=30)
def send_email(to: str, subject: str) -> dict:
    ...
    return {"sent_at": time.time()}

# Caller
r = send_email("alice@example.com", "Hi")  # enqueues, returns a TaskResult
print(r.get(timeout=10))                   # blocks until worker runs it
```

Worker side, either in-process or as its own process:

```shell
python -m honker worker myapp.tasks:db --queue=emails --concurrency=4
```

Auto-name is `{module}.{qualname}` (Huey/Celery convention). Explicit names with `@emails.task(name="...")` are recommended in prod so renames don't orphan pending jobs. Periodic tasks use `@emails.periodic_task(crontab("0 3 * * *"))`. Full details in packages/honker/examples/tasks.py.
```python
stream = db.stream("user-events")

with db.transaction() as tx:
    tx.execute("UPDATE users SET name=? WHERE id=?", [name, uid])
    stream.publish({"user_id": uid, "change": "name"}, tx=tx)

async for event in stream.subscribe(consumer="dashboard"):
    await push_to_browser(event)
```

Each named consumer tracks its own offset in the `_honker_stream_consumers` table. `subscribe` replays rows past the saved offset, then transitions to live delivery on commit wake. The iterator auto-saves the offset at most every 1000 events or every 1 second (whichever comes first) so a high-throughput stream doesn't hammer the single-writer slot. Override with `save_every_n=` / `save_every_s=`, or set both to 0 to disable auto-save and call `stream.save_offset(consumer, offset, tx=tx)` yourself (atomic with whatever you just did in that tx). At-least-once: after a crash, events since the last flushed offset are re-delivered.
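The monotonic offset save can be sketched in plain SQL with Python's stdlib `sqlite3`; the table and column names below are illustrative, not honker's actual `_honker_stream_consumers` schema:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE consumer_offsets (
        consumer TEXT NOT NULL,
        stream   TEXT NOT NULL,
        off      INTEGER NOT NULL,
        PRIMARY KEY (consumer, stream)
    )
""")

def save_offset(consumer: str, stream: str, off: int) -> None:
    # Upsert that only moves forward: a stale writer can never
    # rewind a newer offset saved by another process.
    db.execute(
        """
        INSERT INTO consumer_offsets (consumer, stream, off)
        VALUES (?, ?, ?)
        ON CONFLICT (consumer, stream)
        DO UPDATE SET off = max(off, excluded.off)
        """,
        (consumer, stream, off),
    )

save_offset("dashboard", "user-events", 42)
save_offset("dashboard", "user-events", 7)  # stale write, ignored
row = db.execute(
    "SELECT off FROM consumer_offsets WHERE consumer = 'dashboard'"
).fetchone()
print(row[0])  # 42
```

The forward-only upsert is what makes manual `save_offset` safe to call from multiple places without coordination.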
```python
async for n in db.listen("orders"):
    print(n.channel, n.payload)
```

```python
with db.transaction() as tx:
    tx.execute("INSERT INTO orders (id, total) VALUES (?, ?)", [42, 99.99])
    tx.notify("orders", {"id": 42})
```

Listeners attach at the current MAX(id); history is not replayed. Use `db.stream()` if you need durable replay. The notifications table is not auto-pruned; call `db.prune_notifications(older_than_s=…, max_keep=…)` from a scheduled task. Notification payloads must be valid JSON so a Python writer and a Node reader can share a channel.
```javascript
const { open } = require('@russellthehippo/honker-node');

const db = open('app.db');

// Atomic: business write + notify commit together
const tx = db.transaction();
tx.execute('INSERT INTO orders (id) VALUES (?)', [42]);
tx.notify('orders', { id: 42 });
tx.commit();

// updateEvents wakes on any commit to the db.
const ev = db.updateEvents();
let lastSeen = 0;
while (running) {
  await ev.next();
  const rows = db.query(
    'SELECT id, payload FROM _honker_notifications WHERE id > ? ORDER BY id',
    [lastSeen],
  );
  for (const row of rows) {
    handle(JSON.parse(row.payload));
    lastSeen = row.id;
  }
}
```

The current cross-language direct proof runs on every platform: Python -> Node wake through Node `updateEvents()`, and Node -> Python listener wake through Python `listen()`.
```java
import dev.honker.*;

try (Database db = Honker.open("app.db")) {
    Queue emails = db.queue("emails");
    long id = emails.enqueue("{\"to\":\"alice@example.com\"}");

    Job job = emails.claimOne("worker-1").orElseThrow();
    sendEmail(job.payloadJson());
    job.ack();

    emails.saveResult(id, "{\"ok\":true}");
    emails.waitResultAsync(id, WaitOptions.timeout(Duration.ofSeconds(10)))
          .thenAccept(this::handleResult);
}
```

The JVM binding keeps JSON-library choice out of the core jar. Bring Jackson, Gson, Moshi, JSON-B, or a hand-written mapper by implementing `JsonCodec<T>`:
```java
JsonCodec<Email> emailJson = new JsonCodec<>() {
    public String encode(Email value) {
        try {
            return mapper.writeValueAsString(value);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public Email decode(String json) {
        try {
            return mapper.readValue(json, Email.class);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
};

try (Database db = Honker.open("app.db")) {
    TypedQueue<Email> emails = db.queue("emails").typed(emailJson);
    emails.enqueue(new Email("alice@example.com"));

    TypedJob<Email> job = emails.claimOne("worker-1").orElseThrow();
    sendEmail(job.payload());
    job.ack();
}
```

Java handles are AutoCloseable, and worker/listener/subscriber options accept executors so application frameworks can own thread pools and shutdown.
```kotlin
honker("app.db").use { db ->
    val emails = db.queue("emails")
    emails.enqueueJson("""{"to":"alice@example.com"}""")

    emails.asFlow("worker-1").collect { job ->
        sendEmail(job.payloadJson())
        job.ack()
    }
}
```

Kotlin adds coroutine-friendly wrappers without duplicating the JVM runtime:

```kotlin
db.listen("orders")
    .asFlow()
    .collect { notification -> println(notification.payloadJson) }

db.stream("user-events")
    .asFlow()
    .collect { event -> push(event.payloadJson) }

val result = task.enqueueJson().await(Duration.ofSeconds(10))
```

Typed helpers use the same `JsonCodec<T>` seam:
```kotlin
val emailJson = object : JsonCodec<Email> {
    override fun encode(value: Email) = mapper.writeValueAsString(value)
    override fun decode(json: String) = mapper.readValue(json, Email::class.java)
}

db.queue("emails").enqueue(Email("alice@example.com"), emailJson)

val job = db.queue("emails").asFlow("worker-1").first()
sendEmail(job.decode(emailJson))
job.ack()
```

```sql
.load ./libhonker_ext
SELECT honker_bootstrap();

INSERT INTO _honker_live (queue, payload) VALUES ('emails', '{"to":"alice"}');
SELECT honker_claim_batch('emails', 'worker-1', 32, 300);  -- JSON array
SELECT honker_ack_batch('[1,2,3]', 'worker-1');            -- DELETEs; returns count
SELECT honker_sweep_expired('emails');                     -- count moved to dead

SELECT honker_lock_acquire('backup', 'me', 60);  -- 1 = got it, 0 = held
SELECT honker_lock_release('backup', 'me');      -- 1 = released

SELECT honker_rate_limit_try('api', 10, 60);  -- 1 = under, 0 = at limit
SELECT honker_rate_limit_sweep(3600);         -- drop windows >1h old

SELECT honker_cron_next_after('0 3 * * *', unixepoch());      -- 5-field cron
SELECT honker_cron_next_after('*/2 * * * * *', unixepoch());  -- 6-field cron
SELECT honker_cron_next_after('@every 5s', unixepoch());      -- interval schedule

SELECT honker_scheduler_register('nightly', 'backups',
    '0 3 * * *', '"go"', 0, NULL);              -- register periodic task
SELECT honker_scheduler_register('fast', 'backups',
    '@every 5s', '"go"', 0, NULL);              -- interval schedule
SELECT honker_scheduler_tick(unixepoch());      -- JSON: fires due
SELECT honker_scheduler_soonest();              -- min next_fire_at
SELECT honker_scheduler_unregister('nightly');  -- 1 = deleted

SELECT honker_queue_next_claim_at('emails');  -- next run_at / reclaim deadline

SELECT honker_stream_publish('orders', 'k', '{"id":42}');  -- returns offset
SELECT honker_stream_read_since('orders', 0, 1000);        -- JSON array
SELECT honker_stream_save_offset('worker', 'orders', 42);  -- monotonic upsert
SELECT honker_stream_get_offset('worker', 'orders');       -- offset or 0

SELECT honker_result_save(42, '{"ok":true}', 3600);  -- save w/ 1h TTL
SELECT honker_result_get(42);                        -- value or NULL
SELECT honker_result_sweep();                        -- prune expired

SELECT notify('orders', '{"id":42}');
```

The extension shares `_honker_live`, `_honker_dead`, and `_honker_notifications` with the Python binding, so a Python worker can claim jobs any other language pushed via the extension. Schema compatibility is pinned by tests/test_extension_interop.py.
This repo includes the honker SQLite loadable extension and bindings
for Python, Node, Rust, Go, Ruby, Bun, Elixir, C++, .NET / C#,
Java/JVM, and Kotlin.
For most applications, SQLite alone is sufficient. There are already great libraries that leverage SQLite for durable messaging. Huey is the one honker draws the most from. This project is inspired by it and seeks to do something similar across languages and frameworks by moving package logic into a SQLite extension.
For Postgres-backed apps, pg_notify + pg-boss or Oban is the equivalent. This library is for apps where SQLite is the primary datastore.
The extension has three primitives that tie it together: ephemeral pub/sub (notify()), durable pub/sub with per-consumer offsets (stream()), at-least-once work queue (queue()). All three are INSERTs inside your transaction, which lets a task "send" be atomic with your business write, and rollback drops everything.
The explicit goal is to provide NOTIFY/LISTEN semantics without application-level polling, achieving single-digit-ms reaction time. If you use your app's existing SQLite file for business data, every commit to the database wakes the workers, so most wakes find nothing to do: workers re-read the message/queue tables and get no rows. This "overtriggering" is deliberate; it is the tradeoff for push-like semantics and fast reaction time.
The language bindings default to journal_mode = WAL because it gives concurrent readers with one writer and efficient fsync batching (wal_autocheckpoint = 10000). Other journal modes (DELETE, TRUNCATE, MEMORY) still work. The wake path is PRAGMA data_version, which increments on every commit in every journal mode and is visible across processes. What you lose in non-WAL modes is WAL's concurrent-read-while-writing property; correctness and cross-process wake do not depend on WAL.
- One `.db` is the entire system (plus `.db-wal` / `.db-shm` sidecars if you've opted into WAL). You get every benefit of SQLite (embedded, local, durable, snapshot-able) that your app already uses.
- Claim is one `UPDATE … RETURNING` via a partial index; ack is one `DELETE`. One writer at a time no matter the journal mode; concurrent readers come with WAL.
- We poll `PRAGMA data_version` every 1 ms to detect commits from any connection in any journal mode. The counter increments on every commit and on checkpoint, so WAL truncation, journal-file comings-and-goings, and exact-size collisions are all handled correctly.
- SQLite has no wire protocol. Consumers must initiate reads; server-push is impossible. Wake signal = counter increment → `SELECT`.
- Transactions are cheap, so jobs, events, and notifications are rows in the caller's open `with db.transaction()` block in an "outbox"-type pattern.
- We use `PRAGMA data_version` instead of `stat(2)` on the WAL file or kernel watchers (FSEvents/inotify/kqueue). `data_version` is a monotonic counter incremented by SQLite on every commit by any connection: it handles WAL truncation, clock skew, and rolled-back transactions correctly. Kernel watchers drop same-process writes on macOS, and `stat(2)` on `(size, mtime)` misses commits when the WAL is truncated then grows back to the same size. `PRAGMA data_version` works identically on Linux/macOS/Windows at ~1 ms granularity for negligible CPU. Cost: ~3.5 µs per query, ~3.5 ms/sec total at 1 kHz.
- Single machine, single writer. SQLite's locking is designed for a single host. Two servers writing one `.db` over NFS will corrupt it. Shard by file, or switch to Postgres.
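The counter behavior the wake path relies on is easy to check with nothing but Python's stdlib `sqlite3` (a minimal sketch; the file path is throwaway):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
watcher = sqlite3.connect(path)  # the wake loop holds its own connection

writer.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, payload TEXT)")
writer.commit()

def data_version(conn):
    # Single-digit-µs read; increments when another connection commits.
    return conn.execute("PRAGMA data_version").fetchone()[0]

before = data_version(watcher)

# A commit on any *other* connection bumps the counter seen by the watcher.
writer.execute("INSERT INTO jobs (payload) VALUES ('{}')")
writer.commit()

after = data_version(watcher)
print(after > before)  # True: time to re-read SQLite state
```

Note that a connection never sees its own commits move the counter, which is why the watcher keeps a dedicated connection.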
Honker does not support SQLite in-memory database filenames such as :memory:, file::memory:?cache=shared, or file:<name>?mode=memory&cache=shared. Bare :memory: creates a separate database per connection, which would split Honker's writer, readers, and update watcher across different databases. SQLite's shared-memory URI forms can share state across multiple connections, but only inside one process, so they do not exercise Honker's cross-process worker/listener contract.
For tests and feature environments, use a temporary file-backed .db. It is still cheap and disposable, but it preserves the same SQLite locking, wake, crash/reopen, and multi-process semantics that Honker relies on in production.
- One PRAGMA-poll thread per `Database`, queries `data_version` every 1 ms
- Counter change → fan out a tick to each subscriber's bounded channel
- Each subscriber runs `SELECT … WHERE id > last_seen` against a partial index, yields rows, returns to wait
- 100 subscribers = 1 poll thread
- Idle listeners run zero SQL queries
Idle cost is a single PRAGMA data_version query per millisecond per database. Listener count scales for free because the wake signal is a SQLite counter read instead of a polling query.
SharedUpdateWatcher (in honker-core) owns the poll thread and fans out to N subscribers via bounded SyncSender<()> channels keyed by subscriber id. Each db.update_events() call registers a subscriber and returns a handle whose Drop auto-unsubscribes, so when a listener is dropped, the bridge thread's rx.recv() returns Err and the thread exits cleanly.
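The fan-out shape can be sketched with stdlib threading primitives. This illustrates the design, not honker-core's Rust implementation; `read_counter` stands in for the 1 ms `PRAGMA data_version` query:

```python
import queue
import threading
import time

class SharedUpdateWatcherSketch:
    """One poll loop fans a tick out to N bounded subscriber channels.

    Sketch only: honker-core does this in Rust with bounded
    SyncSender<()> channels keyed by subscriber id.
    """

    def __init__(self, read_counter):
        self.read_counter = read_counter
        self.last = read_counter()     # snapshot before the thread starts
        self.subscribers = {}          # subscriber id -> bounded Queue
        self.lock = threading.Lock()
        self.stop = threading.Event()
        threading.Thread(target=self._poll, daemon=True).start()

    def subscribe(self, sub_id):
        ch = queue.Queue(maxsize=1)    # bounded: a slow subscriber coalesces ticks
        with self.lock:
            self.subscribers[sub_id] = ch
        return ch

    def _poll(self):
        while not self.stop.is_set():
            time.sleep(0.001)          # 1 ms cadence
            now = self.read_counter()
            if now != self.last:
                self.last = now
                with self.lock:
                    channels = list(self.subscribers.values())
                for ch in channels:
                    try:
                        ch.put_nowait(None)  # wake; drop if a tick is pending
                    except queue.Full:
                        pass

counter = [0]
watcher = SharedUpdateWatcherSketch(lambda: counter[0])
ch = watcher.subscribe("listener-1")
counter[0] += 1             # simulate a commit bumping data_version
tick = ch.get(timeout=2)    # subscriber wakes within ~1 ms
watcher.stop.set()
print("woke:", tick is None)
```

The bounded size-1 channel is the important detail: a subscriber that hasn't processed the last tick simply coalesces the next one instead of building a backlog.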
Polling is the default. It's the only backend shipped in published wheels. Two opt-in alternatives exist behind Cargo features for source builds: kernel filesystem events, and an mmap read of SQLite's WAL index. Builds without the requested feature reject kernel / shm explicitly instead of silently substituting polling. Both experimental backends can give lower idle CPU or faster wakes, but they can also miss wakes or fire wakes you didn't ask for. All three watch for the database file being swapped under them; if that happens they shut down loudly — every subscriber sees an error from update_events() instead of hanging.
Binding support is tracked in BINDINGS.md. All maintained
bindings route blocking wake waits through the same honker-core
watcher, either in-process or through the shared extension ABI.
One thing changed for everyone, no opt-in needed: the polling backend now keeps its connection through transient SQLITE_BUSY / SQLITE_LOCKED errors during commits. Before, it would drop and reconnect, which could miss wakes on non-WAL journal modes (DELETE / TRUNCATE / PERSIST). Now it just retries the next tick.
Full reference — when to pick which, source-build flags, recovery patterns, what we haven't tested yet — at docs › Watcher backends.
- `_honker_live`: pending + processing rows
- Partial index: `(queue, priority DESC, run_at, id) WHERE state IN ('pending','processing')`
- Claim = one `UPDATE … RETURNING` via that index
- Ack = one `DELETE`
- Retry-exhausted → `_honker_dead` (never scanned by claim path)
Partial-index on state means the claim hot path is bounded by the working-set size rather than the history size. A queue with 100k dead rows claims as fast as a queue with zero.
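The claim/ack hot path described above can be reproduced with stdlib `sqlite3`. The schema is illustrative, not honker's exact DDL, and `UPDATE … RETURNING` needs SQLite 3.35+:

```python
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE live (
        id       INTEGER PRIMARY KEY,
        queue    TEXT NOT NULL,
        state    TEXT NOT NULL DEFAULT 'pending',
        priority INTEGER NOT NULL DEFAULT 0,
        run_at   INTEGER NOT NULL DEFAULT 0,
        worker   TEXT,
        payload  TEXT NOT NULL
    );
    -- Partial index: the claim path never touches done/dead history.
    CREATE INDEX live_claim ON live (queue, priority DESC, run_at, id)
        WHERE state IN ('pending', 'processing');
""")
db.execute("INSERT INTO live (queue, payload) VALUES ('emails', '{\"to\":\"alice\"}')")

# Claim: one UPDATE ... RETURNING through the partial index.
job = db.execute(
    """
    UPDATE live SET state = 'processing', worker = ?
    WHERE id = (
        SELECT id FROM live
        WHERE queue = ? AND state = 'pending' AND run_at <= ?
        ORDER BY priority DESC, run_at, id
        LIMIT 1
    )
    RETURNING id, payload
    """,
    ("worker-1", "emails", int(time.time())),
).fetchone()
print(job)  # (1, '{"to":"alice"}')

# Ack: one DELETE.
db.execute("DELETE FROM live WHERE id = ? AND worker = ?", (job[0], "worker-1"))
```

Because the index predicate matches the claim predicate, SQLite walks only pending/processing rows regardless of how much history the table holds.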
- `async for job in q.claim(id)` yields one job at a time via `claim_batch(id, 1)`
- `Job.ack()` is one `DELETE` in its own transaction. Return is an honest bool: `True` iff the claim was still valid, `False` if the visibility window elapsed and another worker reclaimed.
- Wakes on database update from any process, or when the next `run_at` / reclaim deadline arrives; a 5 s paranoia poll is the only fallback.
For batched work, call claim_batch(worker_id, n) directly and ack with queue.ack_batch(ids, worker_id). The library doesn't hide batching behind the iterator. The per-tx cost and the at-most-once visibility semantics are easier to reason about when the API doesn't try to be clever.
- `notify()` is a SQL scalar function registered on the writer connection
- INSERTs into `_honker_notifications` under the caller's open tx
- `queue.enqueue(…, tx=tx)` and `stream.publish(…, tx=tx)` do the same
- Rollback drops the job/event/notification with the rest of the tx
This is the transactional outbox pattern, by default, without a library to install. Business write and side-effect enqueue commit or roll back together. There is no separate dispatch table and no separate dispatcher process: the side-effect row is the committed row, and any process watching the db picks it up within ~1 ms.
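The commit-together / rollback-together property is plain SQLite transactionality. A minimal stdlib sketch with illustrative table names:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders        (id INTEGER PRIMARY KEY, user_id INTEGER);
    CREATE TABLE notifications (id INTEGER PRIMARY KEY, channel TEXT, payload TEXT);
""")

def place_order(user_id: int, fail: bool) -> None:
    try:
        with db:  # one transaction: both rows commit, or neither does
            db.execute("INSERT INTO orders (user_id) VALUES (?)", (user_id,))
            db.execute(
                "INSERT INTO notifications (channel, payload) VALUES (?, ?)",
                ("orders", '{"user_id": %d}' % user_id),
            )
            if fail:
                raise RuntimeError("business logic blew up")
    except RuntimeError:
        pass  # the context manager already rolled back

place_order(42, fail=False)  # commits order + notification together
place_order(43, fail=True)   # rolls back both

orders = db.execute("SELECT count(*) FROM orders").fetchone()[0]
notes = db.execute("SELECT count(*) FROM notifications").fetchone()[0]
print(orders, notes)  # 1 1
```

There is no window in which the business row exists without its notification row, which is exactly the dual-write problem a separate broker reintroduces.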
- A `data_version` change wakes every subscriber on that `Database`, not just the ones whose channel committed
- Each wasted wake = one indexed SELECT (microseconds)
- A missed wake = a silent correctness bug
The library prefers waking ten listeners that don't care over missing one that does. Channel filtering happens in the SELECT path instead of the trigger notification. Many small queries are efficient in SQLite.
- Queue jobs persist until ack; retry-exhausted rows move to `_honker_dead`
- Stream events persist; each named consumer tracks its own offset
- Notify is fire-and-forget and not auto-pruned
The caller chooses retention per primitive. db.prune_notifications(older_than_s=…, max_keep=…) is a tool you invoke. This keeps retention policy visible in the caller's code instead of inherited from a library default.
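What a prune has to do is one DELETE. The sketch below shows one plausible reading of the `older_than_s` + `max_keep` combination (retain the newest `max_keep` rows regardless of age) against an illustrative table, not honker's internal SQL:

```python
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE notifications (
        id         INTEGER PRIMARY KEY,
        created_at INTEGER NOT NULL,
        payload    TEXT NOT NULL
    )
""")

now = int(time.time())
rows = [(now - 7200, "old-1"), (now - 7200, "old-2"),
        (now - 10, "fresh-1"), (now - 5, "fresh-2"), (now, "fresh-3")]
db.executemany(
    "INSERT INTO notifications (created_at, payload) VALUES (?, ?)", rows
)

def prune(older_than_s: int, max_keep: int) -> int:
    """Drop rows past the age cutoff, but always retain the newest max_keep."""
    cur = db.execute(
        """
        DELETE FROM notifications
        WHERE created_at < ?
          AND id NOT IN (
              SELECT id FROM notifications ORDER BY id DESC LIMIT ?
          )
        """,
        (int(time.time()) - older_than_s, max_keep),
    )
    return cur.rowcount

deleted = prune(older_than_s=3600, max_keep=3)
print(deleted)  # 2: the two hour-old rows go, the newest three stay
```

Run it from a scheduled task so retention is an explicit, visible policy in the caller's code.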
- Rollback drops jobs/events/notifications with your business write (SQLite ACID).
- SIGKILL mid-tx is safe. SQLite's atomic-commit rollback on next open leaves no stale state (WAL or rollback journal, depending on mode). Verified in `tests/test_crash_recovery.py` (subprocess killed pre-COMMIT, `PRAGMA integrity_check == 'ok'`, fresh notifies still flow).
- If a worker crashes mid-job, the claim expires after `visibility_timeout_s` (default 300 s) and another worker reclaims. `attempts` increments. After `max_attempts` (default 3), the row moves to `_honker_dead`.
- Listeners offline during a prune miss the pruned events. For durable replay, use `db.stream()`, which tracks per-consumer offsets.
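The reclaim rule is a timestamp comparison at claim time. A stdlib `sqlite3` sketch with an illustrative schema (`UPDATE … RETURNING` needs SQLite 3.35+):

```python
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE live (
        id               INTEGER PRIMARY KEY,
        state            TEXT NOT NULL DEFAULT 'pending',
        worker           TEXT,
        claim_expires_at INTEGER,
        attempts         INTEGER NOT NULL DEFAULT 0,
        payload          TEXT NOT NULL
    )
""")
db.execute("INSERT INTO live (payload) VALUES ('{\"to\":\"alice\"}')")

def claim(worker: str, visibility_timeout_s: int = 300):
    """Claim a pending row, or reclaim one whose visibility window lapsed."""
    now = int(time.time())
    return db.execute(
        """
        UPDATE live
        SET state = 'processing', worker = ?,
            claim_expires_at = ? + ?, attempts = attempts + 1
        WHERE id = (
            SELECT id FROM live
            WHERE state = 'pending'
               OR (state = 'processing' AND claim_expires_at < ?)
            LIMIT 1
        )
        RETURNING id, worker, attempts
        """,
        (worker, now, visibility_timeout_s, now),
    ).fetchone()

first = claim("worker-1", visibility_timeout_s=0)  # 0 s window: expires at once
time.sleep(1)                                      # let the window lapse
second = claim("worker-2")                         # reclaims the same row
print(first, second)  # (1, 'worker-1', 1) (1, 'worker-2', 2)
```

No sweeper process is required for reclaims: a lapsed deadline simply makes the row claimable again, and `attempts` records every takeover.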
Honker ships no framework plugins. The API is small and the integration is a few lines of glue:
```python
# FastAPI: enqueue in a request, run workers via lifespan.
@app.on_event("startup")
async def _start_workers():
    async def worker_loop():
        async for job in db.queue("emails").claim("worker"):
            await honker._worker.run_task(
                job, send_email, timeout=30, retries=3, backoff=2.0
            )
    app.state._worker = asyncio.create_task(worker_loop())

@app.post("/orders")
async def create_order(order: dict):
    with db.transaction() as tx:
        tx.execute("INSERT INTO orders (user_id) VALUES (?)", [order["user_id"]])
        db.queue("emails").enqueue({"to": order["email"]}, tx=tx)
    return {"ok": True}
```

SSE endpoints are ~30 lines of `async def stream(...): yield f"data: ...\n\n"` over `db.listen(channel)` or `db.stream(name).subscribe(...)`. For Django/Flask, run the worker as a dedicated CLI process (same pattern as Celery/RQ).
Load libhonker_ext on your ORM's connection and call the SQL functions inside the ORM's own transaction. The enqueue commits atomically with your business write.
```python
# SQLAlchemy
@event.listens_for(engine, "connect")
def _load_honker(conn, _):
    conn.enable_load_extension(True)
    conn.load_extension("/path/to/libhonker_ext")
    conn.execute("SELECT honker_bootstrap()")

with Session(engine) as s, s.begin():
    s.add(Order(user_id=42))
    s.execute(text("SELECT honker_enqueue(:q, :p, NULL, NULL, 0, 3, NULL)"),
              {"q": "emails", "p": '{"to":"alice@example.com"}'})
```

For JVM ORMs, use the ORM-owned JDBC connection inside the ORM transaction. Configure sqlite-jdbc with extension loading enabled, load the extension once per connection, then enqueue with SQL where the business write happens.
```java
// Hibernate / JPA
entityManager.unwrap(Session.class).doWork(conn -> {
    try (PreparedStatement stmt = conn.prepareStatement(
            "SELECT honker_enqueue(?, ?, NULL, NULL, 0, 3, NULL)"
    )) {
        stmt.setString(1, "emails");
        stmt.setString(2, "{\"to\":\"alice@example.com\"}");
        stmt.executeQuery().close();
    }
});
```

```kotlin
// jOOQ
ctx.transaction { cfg ->
    val tx = DSL.using(cfg)
    tx.insertInto(ORDERS).set(ORDERS.USER_ID, 42).execute()
    tx.fetchValue(
        "SELECT honker_enqueue(?, ?, NULL, NULL, 0, 3, NULL)",
        "emails",
        """{"to":"alice@example.com"}""",
    )
}
```

```kotlin
// Exposed
transaction {
    Orders.insert { it[userId] = 42 }
    exec(
        "SELECT honker_enqueue('emails', '{\"to\":\"alice@example.com\"}', NULL, NULL, 0, 3, NULL)"
    )
}
```

Workers run as a separate process using `honker.open("app.db")` or `Honker.open("app.db")`. The commit watcher wakes on commits from any connection to the file. See Using with an ORM for Django, SQLModel, Drizzle, Kysely, sqlx, GORM, ActiveRecord, Ecto, a typed-payload `TypedQueue[T]` wrapper pattern for SQLModel/Pydantic, and the Prisma caveat.
Handles thousands of messages per second on a modern laptop, with cross-process wake latency bounded by the 1 ms poll cadence (~1–2 ms median on M-series). Run bench/wake_latency_bench.py and bench/real_bench.py to measure on your hardware.
Layout:

```
honker-core/         # Rust rlib shared across all bindings (in-tree, published on crates.io)
honker-extension/    # SQLite loadable extension (cdylib, published on crates.io)
packages/
  honker/            # Python package (PyO3 cdylib + Queue/Stream/Outbox/Scheduler)
  honker-node/       # napi-rs Node.js binding
  honker-rs/         # ergonomic Rust wrapper
  honker-go/         # Go binding
  honker-ruby/       # Ruby binding
  honker-bun/        # Bun binding
  honker-ex/         # Elixir binding
  honker-cpp/        # C++ binding
  honker-dotnet/     # .NET / C# binding
  honker-jvm/        # JVM / Java-compatible binding
  honker-kotlin/     # Kotlin convenience wrapper
tests/               # integration tests (cross-package)
bench/               # benches
site/                # honker.dev (Astro) [git submodule]
```
The maintained language bindings live in-tree so one feature can update
core behavior, docs, tests, and binding surfaces together. The core
engine is still Rust (honker-core + honker-extension); the language
packages are first-class wrappers published independently to their
ecosystems. site/ remains a separate docs-site submodule.
```shell
make test              # default: rust + python + node (fast, ~10s)
make test-python-slow  # soak + real-time cron tests (~2 min)
make test-jvm          # JVM binding tests
make test-kotlin       # JVM + Kotlin wrapper tests
make test-jvm-consumer # clean Maven consumer + packaged native proof
make test-all          # everything including slow marks
make build             # PyO3 maturin develop + loadable extension
```

```shell
python bench/wake_latency_bench.py --samples 500
python bench/real_bench.py --workers 4 --enqueuers 2 --seconds 15
python bench/ext_bench.py
```

One-time: `make install-coverage-deps` (installs coverage.py + cargo-llvm-cov).

```shell
make coverage         # both HTML reports into coverage/
make coverage-python  # honker python paths
make coverage-rust    # honker-core Rust unit tests
```

Python coverage reflects the full honker test suite (~92% of packages/honker/). Rust coverage reflects only cargo test. Many honker_ops.rs paths (honker_enqueue, honker_claim_batch, etc.) are only exercised via the Python test suite and won't show up in the Rust report. Combined cross-language coverage is non-trivial (LLVM profile-data merging across PyO3 boundaries) and is deferred.
Apache 2.0. See LICENSE.
