
Conversation

@sephynox (Contributor) commented Jan 26, 2026

Summary

This PR introduces a new storage anchor service:

  • API with PUT, GET, DELETE, and SEARCH operations
  • Path-based namespacing (/user//...) with owner-enforced access control
  • Client-side encryption via EncryptedContainer with optional author signing (RFC 5652 CMS SignerInfo)
  • Public/private visibility with pre-signed URLs for anonymous public access
  • Quota enforcement with configurable limits per object size, object count, and total storage
  • Namespace validators for content validation (e.g., user icons)
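
Purely as illustration, here is a rough sketch of how a caller might exercise those operations end to end. The provider shape, method names, and option fields below are assumptions based on this summary, not the actual client SDK surface:

```typescript
// Illustrative sketch only: every name below is assumed, not the real SDK API.
interface StorageProviderLike {
  put(path: string, data: Uint8Array, options?: { visibility?: 'public' | 'private' }): Promise<void>;
  get(path: string): Promise<Uint8Array>;
  search(prefix: string): Promise<string[]>;
  delete(path: string): Promise<void>;
}

async function roundTrip(provider: StorageProviderLike, path: string): Promise<void> {
  // PUT a private object (per the summary, payloads are encrypted client-side
  // via EncryptedContainer before they reach the server).
  await provider.put(path, new TextEncoder().encode('hello'), { visibility: 'private' });

  // GET it back, SEARCH under the same owner-scoped prefix, then DELETE.
  const data = await provider.get(path);
  const matches = await provider.search(path);
  await provider.delete(path);

  console.log(new TextDecoder().decode(data), matches.length);
}
```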

Key Changes

Encrypted Container (src/lib/encrypted-container.ts)

  • Added RFC 5652 CMS SignerInfo support for author signatures
  • Added structured error handling with EncryptedContainerError and typed error codes
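
As a hedged sketch of what consuming the structured errors might look like. Only EncryptedContainerError is named in this PR; the decrypt() signature and the specific code values below are placeholders, not the actual definitions in src/lib/encrypted-container.ts:

```typescript
// Sketch only: 'DECRYPTION_FAILED' and 'SIGNATURE_INVALID' are placeholder
// codes, and decrypt()'s real signature may differ from what is shown here.
import { EncryptedContainer, EncryptedContainerError } from './encrypted-container';

async function tryOpen(container: EncryptedContainer, key: CryptoKey): Promise<ArrayBuffer | null> {
  try {
    return await container.decrypt(key);
  } catch (error) {
    if (error instanceof EncryptedContainerError) {
      // Typed codes let callers branch on the failure mode instead of
      // string-matching error messages.
      if (error.code === 'DECRYPTION_FAILED' || error.code === 'SIGNATURE_INVALID') {
        return null;
      }
    }
    throw error;
  }
}
```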

Storage Service (src/services/storage/)

  • Server (server.ts): HTTP server implementation with authenticated endpoints
  • Client (client.ts): Client SDK with provider discovery
  • Memory Backend (drivers/memory.ts): In-memory backend with atomic operations (copy-on-write)
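
For flavor, a minimal copy-on-write sketch of how such an atomic scope can work over an in-memory map. Only StorageBackend and StorageAtomicInterface are named in the diff; the class and method names here are invented for illustration:

```typescript
// Copy-on-write sketch: mutations are applied to a cloned map and swapped in
// only if the whole operation succeeds, so a failed multi-step operation
// leaves the live store untouched.
class MemoryStoreSketch {
  private objects = new Map<string, Uint8Array>();

  async atomic<T>(operation: (scope: Map<string, Uint8Array>) => Promise<T>): Promise<T> {
    const snapshot = new Map(this.objects);    // snapshot for the atomic scope
    const result = await operation(snapshot);  // all reads/writes see this snapshot
    this.objects = snapshot;                   // commit by swapping the reference
    return result;
  }
}
```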

@sephynox sephynox self-assigned this Jan 26, 2026
@sephynox sephynox added the enhancement New feature or request label Jan 26, 2026

Copilot AI left a comment

Pull request overview

Introduces a new Storage Anchor service with HTTP server + client support, including path-based object storage, quota reporting, namespace validation, and public access via pre-signed URLs. It also extends the resolver metadata model and enhances EncryptedContainer with structured error handling and optional signing.

Changes:

  • Added Storage Anchor server routes (put/get/delete/search/quota/public) with quota/validator hooks and service metadata publishing.
  • Added Storage Anchor client/provider for discovery via resolver, CRUD operations, search/quota, and pre-signed public URLs.
  • Extended core libraries: resolver types/lookup, error deserialization mapping, and EncryptedContainer error/signing support (with tests).

Reviewed changes

Copilot reviewed 15 out of 16 changed files in this pull request and generated 7 comments.

Summary per file:

  • src/services/storage/server.ts - Implements Storage Anchor HTTP API endpoints and publishes storage service metadata
  • src/services/storage/server.test.ts - Adds basic server/serviceMetadata and error-path tests
  • src/services/storage/lib/validators.ts - Adds namespace validator framework + built-in icon validator
  • src/services/storage/lib/validators.test.ts - Tests validator matching and icon validation logic
  • src/services/storage/common.ts - Defines storage types, requests/responses, signing data, and storage-specific errors
  • src/services/storage/common.test.ts - Adds path utility tests and an in-memory backend for test usage
  • src/services/storage/client.ts - Adds resolver-driven client/provider with signed requests and public URL generation
  • src/services/storage/client.test.ts - Adds integration-style tests for provider discovery and CRUD/search/quota/public flows
  • src/lib/utils/tests/node.ts - Adds helper to publish resolver metadata in tests
  • src/lib/resolver.ts - Extends resolver metadata schema and lookup logic for storage services
  • src/lib/resolver.test.ts - Refactors to reuse shared resolver-metadata helper
  • src/lib/error.ts - Registers storage service errors for JSON deserialization
  • src/lib/encrypted-container.ts - Adds structured error type/codes and optional signing (v3) support
  • src/lib/encrypted-container.test.ts - Updates tests for buffer conversions + adds error/signing coverage
  • src/client/index.ts - Exposes Storage client entrypoint alongside other service clients

Copilot AI left a comment

Pull request overview

Copilot reviewed 15 out of 16 changed files in this pull request and generated 6 comments.

Copilot AI left a comment

Pull request overview

Copilot reviewed 15 out of 16 changed files in this pull request and generated 5 comments.

Copilot AI left a comment

Pull request overview

Copilot reviewed 16 out of 17 changed files in this pull request and generated 7 comments.

Copilot AI left a comment

Pull request overview

Copilot reviewed 16 out of 17 changed files in this pull request and generated 1 comment.

@sephynox sephynox requested a review from Copilot January 27, 2026 23:46

Copilot AI left a comment

Pull request overview

Copilot reviewed 16 out of 17 changed files in this pull request and generated 6 comments.

@sephynox sephynox marked this pull request as ready for review January 28, 2026 00:30

Member left a comment

Why is this file changing while package.json is not in this changeset?

* Interface for atomic storage operations.
* Provides the same operations as StorageBackend but within an atomic scope.
*/
export interface StorageAtomicInterface {

Member left a comment

This seems like it will be a VERY difficult interface to implement for a real-world provider, and I'm not sure we need an atomic interface for multiple storage operations.

What's the thinking behind requiring every provider to take a snapshot of what the storage system looks like at a given moment so that, for example, a search doesn't return any newer documents?

Contributor (Author) left a comment

Quota management was the original issue. Without atomicity, you could exceed the limit, or be incorrectly prevented from using it, under certain circumstances.
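
For illustration, the kind of race an atomic scope closes. Everything below is a sketch with invented names (getUsage, put, atomic), not the PR's actual backend interface:

```typescript
// Hypothetical interfaces for the sketch only.
interface UsageBackend {
  getUsage(owner: string): Promise<number>;
  put(owner: string, size: number): Promise<void>;
}
interface AtomicUsageBackend extends UsageBackend {
  atomic<T>(operation: (scope: UsageBackend) => Promise<T>): Promise<T>;
}

// Non-atomic check: two interleaved PUTs can both read the same usage figure,
// both pass the check, and together exceed the quota (or, conversely, a stale
// read can block a write that would actually fit).
async function putChecked(backend: UsageBackend, owner: string, size: number, limit: number) {
  const used = await backend.getUsage(owner);
  if (used + size > limit) throw new Error('quota exceeded');
  await backend.put(owner, size);              // race window between read and write
}

// Atomic version: the read and the write are evaluated within one scope, so
// interleaved operations cannot both slip under the limit.
async function putCheckedAtomic(backend: AtomicUsageBackend, owner: string, size: number, limit: number) {
  await backend.atomic(async (scope) => {
    const used = await scope.getUsage(owner);
    if (used + size > limit) throw new Error('quota exceeded');
    await scope.put(owner, size);
  });
}
```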

Member left a comment

It seems like there should be tests for this, then?


// Validate encrypted container if needed
if (needsAnchorDecryption) {
  if (!anchorAccount?.hasPrivateKey) {

Member left a comment

We could check this at startup

};

// GET /api/object - Retrieve an object
routes['GET /api/object'] = async function(_params, _postData, _headers, url) {

Member left a comment

Ideally the resource would be part of the URL pathname and not a query parameter.

Contributor (Author) left a comment

routeMatch does not support wildcards. Suggestions?

const signatureBuffer = Buffer.from(signature, 'base64');
const messageBuffer = Buffer.from(message, 'utf-8');

const validSig = ownerAccount.verify(

Member left a comment

Why not use the utils/signing library?

}

// Decrypt using anchor account
if (!anchorAccount?.hasPrivateKey) {

Member left a comment

Again, this could be checked at startup instead of at random places throughout the code.
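
For illustration, what a fail-fast startup check might look like; the constructor shape below is assumed, not the server's actual one:

```typescript
// Hypothetical constructor-time validation: reject a misconfigured server once,
// instead of re-checking anchorAccount.hasPrivateKey inside each route handler.
class StorageServerSketch {
  constructor(private readonly anchorAccount: { hasPrivateKey: boolean }) {
    if (!anchorAccount.hasPrivateKey) {
      throw new Error('Storage server requires an anchor account with a private key');
    }
  }
}
```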

}

// Decode base64 data to get actual stored object size
const data = Buffer.from(request.data, 'base64');

Member left a comment

Why would the data we're asking to be stored come to us in base64 encoding? It should be the body of the request, which can be encoded safely in a more efficient way.
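
For illustration, what a raw-body PUT could look like from the client side; the query parameter, header, and route shown here are assumptions, not the PR's actual request format:

```typescript
// Sketch only: sends the object bytes as the raw request body rather than a
// base64 field inside JSON, avoiding the ~33% size overhead and the extra
// decode step on the server.
async function putObjectRaw(baseUrl: string, path: string, data: Uint8Array): Promise<Response> {
  return fetch(`${baseUrl}/api/object?path=${encodeURIComponent(path)}`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/octet-stream' },
    body: data,   // raw binary body; metadata can travel in headers or the query string
  });
}
```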

}) => Promise<StorageObjectMetadata>;
}

type ClientTestFn = (ctx: ClientTestContext) => Promise<void>;

Member left a comment

We're not starving for space; we can use real words like context and Function when writing a type that is meant to describe what things are...
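
For illustration, the rename being suggested would presumably look something like this (not a committed change):

```typescript
// Spelled-out names in place of 'Fn' and 'ctx'; ClientTestContext is the
// existing context type defined earlier in client.test.ts.
type ClientTestFunction = (context: ClientTestContext) => Promise<void>;
```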

Comment on lines 89 to 90
const makePath = (relativePath: string): StoragePath =>
makeStoragePath(account.publicKeyString.get(), relativePath);

Member left a comment

Suggested change:

- const makePath = (relativePath: string): StoragePath =>
-   makeStoragePath(account.publicKeyString.get(), relativePath);
+ const makePath = function(relativePath: string): StoragePath {
+   return(makeStoragePath(account.publicKeyString.get(), relativePath));
+ };

/**
* Helper to run a test with a fully configured client and provider.
*/
async function withClient(seed: string | ArrayBuffer, fn: ClientTestFn, options: WithClientOptions = {}): Promise<void> {

Member left a comment

We don't need to describe the method we're calling as fn -- for one, we already know its type and shape as an Fn (or, as some might say, a function) from the type annotation. It can have a better name, one meant for humans, who are the only ones who care about names.


// #endregion

describe('Storage Client - Provider Discovery', () => {

Member left a comment

Suggested change:

- describe('Storage Client - Provider Discovery', () => {
+ describe('Storage Client - Provider Discovery', function() {

const path = makePath('test.txt');

// PUT
const putResult = await provider.put(path, testData, {

Member left a comment

Maybe the client should have a beginSession() method for a given provider, which lets you have a session with some state like:

  • account
  • working directory
  • default visibility (optional)
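
Purely as a sketch of that idea; none of these names exist in the PR, they are only meant to make the suggestion concrete:

```typescript
// Hypothetical session API drawn from the review comment above.
type Visibility = 'public' | 'private';

interface StorageSession {
  // Relative paths resolve against the session's working directory, and the
  // session's default visibility applies unless overridden per call.
  put(relativePath: string, data: Uint8Array, options?: { visibility?: Visibility }): Promise<void>;
  get(relativePath: string): Promise<Uint8Array>;
}

interface BeginSessionOptions {
  account: unknown;             // the signing account (type elided in this sketch)
  workingDirectory: string;     // base path that relative paths resolve under
  defaultVisibility?: Visibility;
}

// A provider exposing beginSession() would bind this state once so individual
// put/get calls don't have to repeat it.
declare function beginSession(options: BeginSessionOptions): StorageSession;
```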

