Closed
20 commits
ae51d8c
Implement Hachi PCS protocol with required primitives (#1)
omibo Feb 28, 2026
4980e1a
Implement Batched Sumcheck and Gruen EQ (#2)
omibo Feb 28, 2026
36455ab
Parallelism, CRT-NTT, extension fields, streaming commitment, fused s…
quangvdao Mar 2, 2026
d5cae31
perf: optimize proving pipeline — multi-level folding, i8 digit pipel…
quangvdao Mar 4, 2026
850e526
fix + refactor: delta decomposition fix, D-agnostic storage, dispatch…
quangvdao Mar 4, 2026
cfe54a7
Labrador/Greyhound recursive lattice proof protocol (#6)
omibo Mar 4, 2026
937fb8f
feat: add disk-persistence feature for setup caching (#8)
quangvdao Mar 4, 2026
402cb7d
chore: harden CI workflow and clean up repo config (#9)
quangvdao Mar 4, 2026
3146a19
perf: gate debug diagnostics behind cfg(debug_assertions) and factor …
quangvdao Mar 6, 2026
cf8632e
perf: streamline recursive Hachi proving path (#11)
quangvdao Mar 7, 2026
24e6499
perf: tighten sumcheck and recursive witness paths (#12)
quangvdao Mar 12, 2026
ed0326e
Feat: Integrate Labrador to Hachi + Optimizing Labrador (#13)
omibo Mar 12, 2026
eb8bea7
perf: reuse z_pre witness data across ring switch (#14)
quangvdao Mar 13, 2026
e62e434
Replace eprintln! with structured logs (#15)
omibo Mar 14, 2026
7909a0f
Opt/labrador internal (#16)
omibo Mar 16, 2026
f9bc066
perf: add d64 partial-split NTT prototype (#18)
quangvdao Mar 19, 2026
91be31e
perf: split Hachi sumcheck into two stages (#17)
quangvdao Mar 19, 2026
d482aa5
ci/docs: benchmark 32-variable 1-of-256 one-hot and document D=64 SIS…
quangvdao Mar 19, 2026
351e56e
perf: scalar-field aggregation and faster linear garbage computation …
omibo Mar 19, 2026
7ed983d
feat: add D64 onehot scheduling infrastructure
quangvdao Mar 19, 2026
27 changes: 15 additions & 12 deletions .github/workflows/ci.yml
@@ -6,6 +6,9 @@ on:
pull_request:
branches: ["**", main]

permissions:
contents: read

env:
RUSTFLAGS: -D warnings
CARGO_TERM_COLOR: always
@@ -19,8 +22,8 @@ jobs:
name: Format
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: dtolnay/rust-toolchain@stable
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: actions-rust-lang/setup-rust-toolchain@a0b538fa0b742a6aa35d6e2c169b4bd06d225a98 # v1
with:
components: rustfmt
- name: Check formatting
@@ -30,21 +33,21 @@ jobs:
name: Clippy
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: dtolnay/rust-toolchain@stable
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: actions-rust-lang/setup-rust-toolchain@a0b538fa0b742a6aa35d6e2c169b4bd06d225a98 # v1
with:
components: clippy
- name: Clippy (all features)
run: cargo clippy -q --message-format=short --all-features --all-targets -- -D warnings
run: cargo clippy --all --all-targets --all-features -- -D warnings
- name: Clippy (no default features)
run: cargo clippy -q --message-format=short --no-default-features --lib -- -D warnings
run: cargo clippy --all --all-targets --no-default-features -- -D warnings

doc:
name: Documentation
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: dtolnay/rust-toolchain@stable
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: actions-rust-lang/setup-rust-toolchain@a0b538fa0b742a6aa35d6e2c169b4bd06d225a98 # v1
- name: Build documentation
run: cargo doc -q --no-deps --all-features
env:
@@ -54,9 +57,9 @@ jobs:
name: Test
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: dtolnay/rust-toolchain@stable
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: actions-rust-lang/setup-rust-toolchain@a0b538fa0b742a6aa35d6e2c169b4bd06d225a98 # v1
- name: Install cargo-nextest
uses: taiki-e/install-action@nextest
uses: taiki-e/install-action@f092c064826410a38929a5791d2c0225b94432fe # nextest
- name: Run tests
run: cargo nextest run -q --all-features
run: cargo nextest run --all-features
277 changes: 277 additions & 0 deletions .github/workflows/onehot-bench.yml
@@ -0,0 +1,277 @@
name: Onehot 32 Variables Benchmark

on:
push:
branches: [main]
pull_request:
branches: ["**", main]
workflow_dispatch:

permissions:
actions: read
contents: read
issues: write
pull-requests: write

env:
CARGO_TERM_COLOR: always
HACHI_BENCH_ARTIFACT_NAME: onehot-bench-32-variables-data
HACHI_BENCH_MODE: onehot
HACHI_BENCH_NUM_VARS: 32

concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true

jobs:
bench:
name: Onehot 32 Variables (1-of-256)
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0

- name: Initialize benchmark paths
run: |
echo "HACHI_BENCH_ARTIFACT_DIR=$RUNNER_TEMP/onehot-bench-artifact" >> "$GITHUB_ENV"
echo "HACHI_BENCH_MAIN_BASELINE_DIR=$RUNNER_TEMP/onehot-bench-main-baseline" >> "$GITHUB_ENV"
echo "HACHI_BENCH_PREVIOUS_RUN_DIR=$RUNNER_TEMP/onehot-bench-previous-run" >> "$GITHUB_ENV"
echo "HACHI_BENCH_REPORT=$RUNNER_TEMP/onehot-bench.md" >> "$GITHUB_ENV"
echo "HACHI_BENCH_COMMENT=$RUNNER_TEMP/onehot-bench-comment.md" >> "$GITHUB_ENV"
echo "HACHI_BENCH_SOURCE_SHA=$GITHUB_SHA" >> "$GITHUB_ENV"
echo "HACHI_BENCH_SOURCE_BRANCH=${{ github.head_ref || github.ref_name }}" >> "$GITHUB_ENV"
echo "HACHI_BENCH_BASE_REF=" >> "$GITHUB_ENV"
echo "HACHI_BENCH_MERGE_BASE_SHA=" >> "$GITHUB_ENV"
{
echo "HACHI_BENCH_SOURCE_SUBJECT<<EOF"
git log -1 --format=%s
echo "EOF"
} >> "$GITHUB_ENV"

- name: Determine PR benchmark merge base
if: github.event_name == 'pull_request'
run: |
echo "HACHI_BENCH_BASE_REF=${{ github.event.pull_request.base.ref }}" >> "$GITHUB_ENV"
merge_base_sha="$(git merge-base "${{ github.event.pull_request.base.sha }}" "${{ github.event.pull_request.head.sha }}" || true)"
if [ -n "$merge_base_sha" ]; then
echo "HACHI_BENCH_MERGE_BASE_SHA=$merge_base_sha" >> "$GITHUB_ENV"
fi

- uses: actions-rust-lang/setup-rust-toolchain@a0b538fa0b742a6aa35d6e2c169b4bd06d225a98 # v1

- name: Build profile example
run: cargo build --release --quiet --example profile

- name: Run onehot 32 variables benchmark (1-of-256)
run: |
python3 scripts/onehot_bench_report.py run \
--binary ./target/release/examples/profile \
--output-dir "$HACHI_BENCH_ARTIFACT_DIR" \
--mode "$HACHI_BENCH_MODE" \
--num-vars "$HACHI_BENCH_NUM_VARS"

- name: Upload benchmark artifact
if: always()
continue-on-error: true
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: ${{ env.HACHI_BENCH_ARTIFACT_NAME }}
path: ${{ env.HACHI_BENCH_ARTIFACT_DIR }}
if-no-files-found: warn
retention-days: 30

- name: Determine comparison baseline artifacts
if: always() && github.event_name == 'pull_request'
continue-on-error: true
id: bench-baselines
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
script: |
const { owner, repo } = context.repo;
const artifactName = process.env.HACHI_BENCH_ARTIFACT_NAME;
const workflowName = process.env.GITHUB_WORKFLOW;
const currentRunId = Number(process.env.GITHUB_RUN_ID);
const currentSha = process.env.GITHUB_SHA;
const pullRequest = context.payload.pull_request;
const headRef = pullRequest.head.ref;
const baseRef = process.env.HACHI_BENCH_BASE_REF;
const mergeBaseSha = process.env.HACHI_BENCH_MERGE_BASE_SHA;

function setBaselineOutput(prefix, run, label) {
core.setOutput(`${prefix}-run-id`, run ? String(run.id) : '');
core.setOutput(`${prefix}-sha`, run ? run.head_sha : '');
core.setOutput(`${prefix}-label`, run ? label : '');
}

async function firstRunWithArtifact(runs) {
for (const run of runs) {
if (run.id === currentRunId) {
continue;
}
if (run.name !== workflowName || run.conclusion !== 'success') {
continue;
}
const artifactsResponse = await github.rest.actions.listWorkflowRunArtifacts({
owner,
repo,
run_id: run.id,
per_page: 100,
});
const artifact = artifactsResponse.data.artifacts.find(candidate =>
candidate.name === artifactName && !candidate.expired
);
if (artifact) {
return run;
}
}
return null;
}

const prRunsResponse = await github.rest.actions.listWorkflowRunsForRepo({
owner,
repo,
event: 'pull_request',
branch: headRef,
status: 'completed',
per_page: 100,
});
const previousPrCandidates = prRunsResponse.data.workflow_runs.filter(run => {
if (run.head_sha === currentSha) {
return false;
}
const prs = run.pull_requests || [];
return prs.length === 0 || prs.some(pr => pr.number === pullRequest.number);
});
const previousPrRun = await firstRunWithArtifact(previousPrCandidates);
if (!previousPrRun) {
core.info('No previous PR baseline artifact found.');
}
setBaselineOutput('previous', previousPrRun, 'the previous successful PR update');

const baseRunsResponse = await github.rest.actions.listWorkflowRunsForRepo({
owner,
repo,
event: 'push',
branch: baseRef,
status: 'completed',
per_page: 100,
});
const baseCandidates = baseRunsResponse.data.workflow_runs.filter(run =>
run.name === workflowName && run.conclusion === 'success'
);
const exactMergeBaseCandidates = mergeBaseSha
? baseCandidates.filter(run => run.head_sha === mergeBaseSha)
: [];
const exactMergeBaseRun =
exactMergeBaseCandidates.length > 0
? await firstRunWithArtifact(exactMergeBaseCandidates)
: null;
const fallbackBaseCandidates = mergeBaseSha
? baseCandidates.filter(run => run.head_sha !== mergeBaseSha)
: baseCandidates;
const mainRun =
exactMergeBaseRun ?? await firstRunWithArtifact(fallbackBaseCandidates);
if (!mainRun) {
core.info('No main baseline artifact found.');
}
const mainLabel =
mainRun && Boolean(mergeBaseSha) && mainRun.head_sha === mergeBaseSha
? `merge-base on \`${baseRef}\``
: `the latest successful \`${baseRef}\` run`;
setBaselineOutput('main', mainRun, mainLabel);

- name: Download main baseline artifact
if: always() && steps.bench-baselines.outputs.main-run-id != ''
continue-on-error: true
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
name: ${{ env.HACHI_BENCH_ARTIFACT_NAME }}
path: ${{ env.HACHI_BENCH_MAIN_BASELINE_DIR }}
run-id: ${{ steps.bench-baselines.outputs.main-run-id }}
github-token: ${{ github.token }}

- name: Download previous run artifact
if: always() && steps.bench-baselines.outputs.previous-run-id != ''
continue-on-error: true
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
name: ${{ env.HACHI_BENCH_ARTIFACT_NAME }}
path: ${{ env.HACHI_BENCH_PREVIOUS_RUN_DIR }}
run-id: ${{ steps.bench-baselines.outputs.previous-run-id }}
github-token: ${{ github.token }}

- name: Render benchmark report
if: always()
continue-on-error: true
env:
HACHI_BENCH_MAIN_BASELINE_LABEL: ${{ steps.bench-baselines.outputs.main-label }}
HACHI_BENCH_MAIN_BASELINE_SHA: ${{ steps.bench-baselines.outputs.main-sha }}
HACHI_BENCH_PREVIOUS_BASELINE_LABEL: ${{ steps.bench-baselines.outputs.previous-label }}
HACHI_BENCH_PREVIOUS_BASELINE_SHA: ${{ steps.bench-baselines.outputs.previous-sha }}
run: |
if [ ! -f "$HACHI_BENCH_ARTIFACT_DIR/summary.json" ]; then
echo "Benchmark summary not found; benchmark step likely failed." > "$HACHI_BENCH_REPORT"
else
python3 scripts/onehot_bench_report.py render \
"$HACHI_BENCH_ARTIFACT_DIR/summary.json" \
--main-baseline-dir "$HACHI_BENCH_MAIN_BASELINE_DIR" \
--previous-baseline-dir "$HACHI_BENCH_PREVIOUS_RUN_DIR" > "$HACHI_BENCH_REPORT"
fi
cp "$HACHI_BENCH_REPORT" "$HACHI_BENCH_ARTIFACT_DIR/report.md"
cat "$HACHI_BENCH_REPORT" >> "$GITHUB_STEP_SUMMARY"
{
echo '<!-- hachi-onehot-bench-report -->'
echo
cat "$HACHI_BENCH_REPORT"
echo
echo '> Posted by Cursor assistant (model: GPT-5.4) on behalf of the user (Quang Dao) with approval.'
} > "$HACHI_BENCH_COMMENT"

- name: Upsert benchmark PR comment
if: >-
always() &&
github.event_name == 'pull_request' &&
github.event.pull_request.head.repo.full_name == github.repository
continue-on-error: true
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
script: |
const fs = require('fs');
const commentPath = process.env.HACHI_BENCH_COMMENT;
if (!fs.existsSync(commentPath)) {
core.info('No benchmark comment body found.');
return;
}

const marker = '<!-- hachi-onehot-bench-report -->';
const body = fs.readFileSync(commentPath, 'utf8');
const { owner, repo } = context.repo;
const issue_number = context.issue.number;
try {
const comments = await github.paginate(
github.rest.issues.listComments,
{ owner, repo, issue_number, per_page: 100 }
);
const existing = comments.find(comment =>
comment.user?.login === 'github-actions[bot]' && comment.body?.includes(marker)
);

if (existing) {
await github.rest.issues.updateComment({
owner,
repo,
comment_id: existing.id,
body,
});
} else {
await github.rest.issues.createComment({
owner,
repo,
issue_number,
body,
});
}
} catch (error) {
core.warning(`Skipping benchmark PR comment upsert: ${error.message}`);
}
4 changes: 4 additions & 0 deletions .gitignore
@@ -5,3 +5,7 @@
.urs

PUBLISH_CHECKLIST.md

profile_traces/

.cursor/
35 changes: 35 additions & 0 deletions AGENTS.md
@@ -0,0 +1,35 @@
# AGENTS.md

**Compatibility notice (explicit): This repo makes NO backward-compatibility guarantees. Breaking changes are allowed and expected.**

## Project Overview

Hachi is a lattice-based polynomial commitment scheme (PCS) with transparent setup and post-quantum security. It is built in Rust and intended to replace Dory in Jolt.

## Essential Commands

```bash
cargo clippy --all --message-format=short -q -- -D warnings
cargo fmt -q
cargo test # no nextest yet
```

## Crate Structure

Two workspace members: `hachi-pcs` (root) and `derive` (proc macros).

- `src/primitives/` — Core traits: `FieldCore`, `Module`, `MultilinearLagrange`, `Transcript`, serialization
- `src/algebra/` — Concrete backends: prime fields, extension fields, cyclotomic rings, NTT, domains
- `src/protocol/` — Protocol layer: commitment, prover, verifier, opening (ring-switch), challenges, transcript
- `src/error.rs` — Error types

## Key Abstractions

- `CommitmentScheme` / `StreamingCommitmentScheme` — top-level PCS traits
- `FieldCore` + `PseudoMersenneField` + `Module` — arithmetic over lattice-friendly fields and rings
- `MultilinearLagrange` — multilinear polynomial in Lagrange basis
- `Transcript` — Fiat-Shamir
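To make the `MultilinearLagrange` abstraction concrete: a multilinear polynomial in the Lagrange basis is stored as its evaluations on the Boolean hypercube, and evaluation at an arbitrary point folds out one variable at a time. The sketch below is illustrative only — the function name and the `f64` scalar are hypothetical stand-ins; the real trait works over lattice-friendly fields.

```rust
// Illustrative sketch (not the crate's real API): evaluate a multilinear
// polynomial given by its 2^n evaluations on {0,1}^n, where index bit 0
// of `evals` corresponds to `point[0]`.
fn eval_multilinear(mut evals: Vec<f64>, point: &[f64]) -> f64 {
    assert_eq!(evals.len(), 1 << point.len(), "need 2^n evaluations");
    for &r in point {
        let half = evals.len() / 2;
        for i in 0..half {
            // Fold out one variable: f(r, rest) = (1 - r)*f(0, rest) + r*f(1, rest)
            evals[i] = (1.0 - r) * evals[2 * i] + r * evals[2 * i + 1];
        }
        evals.truncate(half);
    }
    evals[0]
}
```

Each fold halves the table, so evaluation is linear in the number of stored evaluations — the same access pattern the sumcheck-related commits above optimize.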

## Feature Flags

- `parallel` — Rayon parallelization
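A feature flag like `parallel` is typically wired up by cfg-gating Rayon at each hot loop. The sketch below shows the usual pattern, not the crate's actual source; `sum_squares` is a made-up helper.

```rust
// Illustrative sketch of feature-gated Rayon parallelism (hypothetical
// helper; not taken from the crate).
#[cfg(feature = "parallel")]
use rayon::prelude::*;

/// Sum of squares: parallel iterator when `parallel` is enabled,
/// sequential fallback otherwise.
fn sum_squares(values: &[u64]) -> u64 {
    #[cfg(feature = "parallel")]
    {
        values.par_iter().map(|v| v * v).sum()
    }
    #[cfg(not(feature = "parallel"))]
    {
        values.iter().map(|v| v * v).sum()
    }
}
```

With this shape, `cargo build` stays dependency-light by default and `cargo build --features parallel` opts into Rayon.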
1 change: 1 addition & 0 deletions CLAUDE.md