16 changes: 16 additions & 0 deletions .github/instructions/instructions.md
@@ -0,0 +1,16 @@
---
applyTo: '**'
---

# Basic instructions

Always run `golangci-lint` and `staticcheck` after completing a Go task.
Verify the quality of the code you provide: remove repetition, fix flaws, and modernise the approach where it improves consistency. Adopt a development method and stick to it consistently.
Parse the project's Makefile to find the commands you need to lint, format, and test the code.

Always document the solutions we find, and where applicable, use the ./docs folder for extensive documentation.

## Toolset

- [Makefile](../../Makefile).
- Always run: `make lint`
39 changes: 38 additions & 1 deletion README.md
@@ -1,4 +1,4 @@
# HyperCache

[![Go](https://github.com/hyp3rd/hypercache/actions/workflows/go.yml/badge.svg)][build-link] [![CodeQL](https://github.com/hyp3rd/hypercache/actions/workflows/codeql.yml/badge.svg)][codeql-link] [![golangci-lint](https://github.com/hyp3rd/hypercache/actions/workflows/golangci-lint.yml/badge.svg)][golangci-lint-link]

@@ -10,7 +10,7 @@

- Tunable expiration and eviction intervals (or fully proactive eviction when the eviction interval is set to `0`).
- Debounced & coalesced expiration trigger channel to avoid thrashing.
- Non-blocking manual `TriggerEviction()` signal.
- Non-blocking manual `TriggerEviction(context.Context)` signal.
- Serializer‑aware memory accounting (item size reflects the backend serialization format when available).
- Multiple eviction algorithms with the ability to register custom ones.
- Multiple stats collectors (default histogram) and middleware hooks.
@@ -61,6 +61,8 @@
- GET /config – sanitized runtime config (now includes replication + virtual node settings when using DistMemory)
- GET /dist/metrics – distributed backend forwarding / replication counters (DistMemory only)
- GET /dist/owners?key=K – current ring owners (IDs) for key K (DistMemory only, debug)
- GET /internal/merkle – Merkle tree snapshot (DistMemory experimental anti-entropy; see the client sketch after this list)
- GET /internal/keys – full key enumeration (debug / anti-entropy fallback; expensive)
- GET /cluster/members – membership snapshot (id, address, state, incarnation, replication factor, virtual nodes)
- GET /cluster/ring – ring vnode hashes (debug / diagnostics)
- POST /evict – trigger eviction cycle
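
For quick diagnostics, the new anti-entropy endpoints can be queried directly. A minimal client sketch follows; the listen address is an assumption, the JSON field names match the handler added in `pkg/backend/dist_http_server.go` (shown further down in this diff), and the hash encoding (plain strings here) is also an assumption:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// merkleSnapshot mirrors the JSON emitted by GET /internal/merkle
// (fields: root, leaf_hashes, chunk_size); the value types are assumptions.
type merkleSnapshot struct {
	Root       string   `json:"root"`
	LeafHashes []string `json:"leaf_hashes"`
	ChunkSize  int      `json:"chunk_size"`
}

func main() {
	// Assumed node address; point this at a DistMemory node's HTTP listener.
	resp, err := http.Get("http://127.0.0.1:8081/internal/merkle")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var snap merkleSnapshot
	if err := json.NewDecoder(resp.Body).Decode(&snap); err != nil {
		panic(err)
	}

	fmt.Printf("root=%s leaves=%d chunk_size=%d\n", snap.Root, len(snap.LeafHashes), snap.ChunkSize)
}
```
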
@@ -181,8 +183,14 @@
| `WithManagementHTTP` | Start optional management HTTP server. |
| `WithDistReplication` | (DistMemory) Set replication factor (owners per key). |
| `WithDistVirtualNodes` | (DistMemory) Virtual nodes per physical node for consistent hashing. |
| `WithDistMerkleChunkSize` | (DistMemory) Keys per Merkle leaf chunk (power-of-two recommended). |
| `WithDistMerkleAutoSync` | (DistMemory) Interval for background Merkle sync (<=0 disables). |
| `WithDistMerkleAutoSyncPeers` | (DistMemory) Limit peers synced per auto-sync tick (0=all). |
| `WithDistListKeysCap` | (DistMemory) Cap number of keys fetched via fallback enumeration. |
| `WithDistNode` | (DistMemory) Explicit node identity (id/address). |
| `WithDistSeeds` | (DistMemory) Static seed addresses to pre-populate membership. |
| `WithDistTombstoneTTL` | (DistMemory) Retain delete tombstones for this duration before compaction (<=0 = infinite). |
| `WithDistTombstoneSweep` | (DistMemory) Interval to run tombstone compaction (<=0 disables). |

*ARC is experimental (not registered by default).
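
A self-contained sketch of how the DistMemory knobs above typically compose via functional options; only the option names and their meanings come from the table, while the types and constructors below are illustrative stand-ins rather than the library's confirmed API:

```go
package main

import (
	"fmt"
	"time"
)

// distConfig is an illustrative stand-in for the backend configuration;
// the field semantics mirror the option table above.
type distConfig struct {
	Replication     int
	VirtualNodes    int
	MerkleChunkSize int
	MerkleAutoSync  time.Duration
	TombstoneTTL    time.Duration
	TombstoneSweep  time.Duration
}

type distOption func(*distConfig)

// Option constructors mimic the documented names; real signatures may differ.
func WithDistReplication(n int) distOption  { return func(c *distConfig) { c.Replication = n } }
func WithDistVirtualNodes(n int) distOption { return func(c *distConfig) { c.VirtualNodes = n } }
func WithDistMerkleChunkSize(n int) distOption {
	return func(c *distConfig) { c.MerkleChunkSize = n }
}
func WithDistMerkleAutoSync(d time.Duration) distOption {
	return func(c *distConfig) { c.MerkleAutoSync = d }
}
func WithDistTombstoneTTL(d time.Duration) distOption {
	return func(c *distConfig) { c.TombstoneTTL = d }
}
func WithDistTombstoneSweep(d time.Duration) distOption {
	return func(c *distConfig) { c.TombstoneSweep = d }
}

func main() {
	cfg := distConfig{}
	opts := []distOption{
		WithDistReplication(3),                   // three owners per key
		WithDistVirtualNodes(64),                 // vnodes per physical node
		WithDistMerkleChunkSize(128),             // power-of-two leaf chunks
		WithDistMerkleAutoSync(30 * time.Second), // background Merkle sync interval
		WithDistTombstoneTTL(10 * time.Minute),   // retain tombstones before compaction
		WithDistTombstoneSweep(time.Minute),      // compaction cadence
	}
	for _, opt := range opts {
		opt(&cfg)
	}

	fmt.Printf("%+v\n", cfg)
}
```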

@@ -204,10 +212,39 @@
- Ownership enforcement (non‑owners forward to primary).
- Replica fan‑out on writes (best‑effort) & replica removals.
- Read‑repair when a local owner misses but another replica has the key.
- Basic delete semantics with tombstones: deletions propagate as versioned tombstones, preventing resurrection during anti-entropy (tombstone retention is in‑memory, no persistence yet); see the sketch after this list.
- Tombstone versioning uses a per-process monotonic counter when no prior item version exists (avoids time-based unsigned casts).
- Remote pull sync infers a tombstone when a key present locally is absent remotely and no local tombstone exists (anti-resurrection guard).
- DebugInject intentionally clears any existing tombstone for that key (test helper for simulating an authoritative resurrection with a higher version).
- Tombstone TTL + periodic compaction: configure with `WithDistTombstoneTTL` / `WithDistTombstoneSweep`; metrics track active & purged counts.
- Metrics exposed via management endpoints (`/dist/metrics`, `/dist/owners`, `/cluster/members`, `/cluster/ring`).
- Includes Merkle phase timings (fetch/build/diff nanos) and counters for keys pulled during anti-entropy.
- Tombstone metrics: `TombstonesActive`, `TombstonesPurged`.
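
A conceptual sketch of the tombstone rules above, assuming a simple version-per-key model; the `tombstone` and `store` types and their methods are illustrative, not the backend's actual implementation:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// tombstone marks a deleted key with the version at which it was deleted.
type tombstone struct{ version uint64 }

var monotonic atomic.Uint64 // per-process counter used when no prior item version exists

type store struct {
	items map[string]uint64    // key -> item version
	tombs map[string]tombstone // key -> delete marker
}

// remove records a tombstone so anti-entropy cannot resurrect the key.
func (s *store) remove(key string) {
	v, ok := s.items[key]
	if !ok {
		v = monotonic.Add(1) // no prior version: fall back to the monotonic counter
	} else {
		v++ // tombstone supersedes the last known item version
	}
	delete(s.items, key)
	s.tombs[key] = tombstone{version: v}
}

// applyRemote merges a key pulled during anti-entropy, respecting tombstones.
func (s *store) applyRemote(key string, version uint64) {
	if t, ok := s.tombs[key]; ok && t.version >= version {
		return // deletion wins: do not resurrect an older value
	}
	delete(s.tombs, key) // a strictly newer write clears the tombstone
	s.items[key] = version
}

func main() {
	s := &store{items: map[string]uint64{"a": 3}, tombs: map[string]tombstone{}}
	s.remove("a")
	s.applyRemote("a", 3) // ignored: the tombstone version is not older
	s.applyRemote("a", 9) // accepted: a newer write resurrects the key
	fmt.Println(s.items, s.tombs)
}
```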

Planned next steps (roadmap excerpts): network transport abstraction, quorum reads/writes, versioning (vector clocks or Lamport), failure detection / node states, rebalancing & anti‑entropy sync.

### Roadmap / PRD Progress Snapshot

| Area | Status |
|------|--------|
| Core in-process sharding | Complete (static ring) |
| Replication fan-out | Implemented (best-effort) |
| Read-repair | Implemented |
| Merkle anti-entropy | Implemented (pull-based) |
| Merkle performance metrics | Implemented (fetch/build/diff nanos) |
| Remote-only key enumeration fallback | Implemented with optional cap (`WithDistListKeysCap`) |
| Delete semantics (tombstones) | Implemented |
| Tombstone compaction / TTL | Implemented (`WithDistTombstoneTTL` / `WithDistTombstoneSweep`) |
| Quorum read/write consistency | Partially scaffolded (consistency levels enum) |
| Failure detection / heartbeat | Experimental heartbeat present |
| Membership changes / dynamic rebalancing | Not yet |
| Network transport (HTTP partial) | Basic HTTP management + Merkle/keys fetch; full RPC TBD |
| Tracing spans (distributed ops) | Planned |
| Metrics exposure | Basic + Merkle phase metrics |
| Persistence | Not in scope yet |
| Benchmarks & tests | Extensive unit + benchmark coverage |

Example minimal setup:

```go
// (remaining example collapsed in this diff view)
```
4 changes: 4 additions & 0 deletions cspell.config.yaml
@@ -33,6 +33,7 @@ words:
- Fprintln
- freqs
- funlen
- gerr
- gitversion
- GITVERSION
- goccy
@@ -49,18 +50,21 @@
- ints
- ireturn
- Itemm
- keyf
- lamport
- LFUDA
- localmodule
- logrus
- memprofile
- Merkle
- Mgmt
- msgpack
- mvdan
- nestif
- Newf
- nolint
- nonamedreturns
- nosec
- NOVENDOR
- paralleltest
- Pipeliner
30 changes: 30 additions & 0 deletions pkg/backend/dist_http_server.go
@@ -40,6 +40,7 @@ func (s *distHTTPServer) start(ctx context.Context, dm *DistMemory) error { //no
s.registerGet(ctx, dm)
s.registerRemove(ctx, dm)
s.registerHealth()
s.registerMerkle(ctx, dm)

return s.listen(ctx)
}
@@ -112,6 +113,35 @@ func (s *distHTTPServer) registerHealth() { //nolint:ireturn
s.app.Get("/health", func(fctx fiber.Ctx) error { return fctx.SendString("ok") })
}

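// registerMerkle exposes the anti-entropy debug endpoints: GET /internal/merkle (tree snapshot) and GET /internal/keys (naive full key enumeration).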
func (s *distHTTPServer) registerMerkle(_ context.Context, dm *DistMemory) { //nolint:ireturn
s.app.Get("/internal/merkle", func(fctx fiber.Ctx) error {
tree := dm.BuildMerkleTree()

return fctx.JSON(fiber.Map{
"root": tree.Root,
"leaf_hashes": tree.LeafHashes,
"chunk_size": tree.ChunkSize,
})
})

// naive keys listing for anti-entropy (testing only). Not efficient for large datasets.
s.app.Get("/internal/keys", func(fctx fiber.Ctx) error {
var keys []string
for _, shard := range dm.shards {
if shard == nil {
continue
}

ch := shard.items.IterBuffered()
for t := range ch {
keys = append(keys, t.Key)
}
}

return fctx.JSON(fiber.Map{"keys": keys})
})
}

func (s *distHTTPServer) listen(ctx context.Context) error { //nolint:ireturn
lc := net.ListenConfig{}
