
feat: add multipart #118

Merged

tatiesmars merged 5 commits into main from feature/add-multipart on Jan 12, 2026

Conversation

@tatiesmars (Contributor)

No description provided.

@tatiesmars force-pushed the feature/add-multipart branch from e37381b to 1d989dd on December 15, 2025 10:30
@shuning-auki (Contributor) left a comment

This problem is not new in this PR, but I just noticed it: the existing upload endpoints roll back all changes when there is an error, yet combining upload and create in one call makes it confusing which part actually failed and how to retry. The new multipart endpoints have the same problem, and since each part upload is now its own transaction, it is even harder for clients to know which parts should be retried when something fails.
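One way to address the retry ambiguity raised above is client-side bookkeeping of which parts completed. This is a minimal sketch under assumptions, not code from this PR; `PartTracker` and its methods are hypothetical names.

```rust
// Hypothetical sketch: track which multipart parts have completed, so a
// client can retry only the failed parts instead of restarting the whole
// upload transaction. Part numbers are 1-based, as is common for multipart.
use std::collections::BTreeSet;

struct PartTracker {
    completed: BTreeSet<u32>,
    total_parts: u32,
}

impl PartTracker {
    fn new(total_parts: u32) -> Self {
        Self { completed: BTreeSet::new(), total_parts }
    }

    /// Record a part as successfully uploaded.
    fn mark_done(&mut self, part: u32) {
        self.completed.insert(part);
    }

    /// Parts that still need to be uploaded (or retried after a failure).
    fn pending(&self) -> Vec<u32> {
        (1..=self.total_parts)
            .filter(|p| !self.completed.contains(p))
            .collect()
    }

    /// True once every part has been acknowledged by the server.
    fn is_complete(&self) -> bool {
        self.completed.len() as u32 == self.total_parts
    }
}
```

With a structure like this, a client that saw parts 1 and 3 succeed out of 4 would know to retry exactly parts 2 and 4.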

tatiesmars and others added 2 commits December 16, 2025 08:26
Co-authored-by: shuning <shuning@aukilabs.com>
Co-authored-by: shuning <shuning@aukilabs.com>
@tatiesmars tatiesmars requested a review from jb-san December 24, 2025 05:33
@jb-san

jb-san commented Dec 24, 2025

I think multipart itself works fine from my tests; when things fail, it is because of infra configuration in the environment.
We need to figure out good basic requirements and make sure we can use our environments.

@dataviruset
Contributor

dataviruset commented Jan 5, 2026

> I think multipart itself works fine from my tests; when things fail, it is because of infra configuration in the environment. We need to figure out good basic requirements and make sure we can use our environments.

Improvements have been made to allow for larger ephemeral volumes when domain servers are running without local storage. These improvements have been rolled out in our dev environment, so you can test again :)
aukilabs/helm-charts#27
aukilabs/helm-charts#28

Docs here: https://github.com/aukilabs/helm-charts/blob/ab120d6a27a8a5cd540d768971a6c53133ce941b/charts/domain-server/values.yaml#L138-L141

@dataviruset dataviruset requested a review from Copilot January 9, 2026 07:38

Copilot AI left a comment


Pull request overview

This PR adds multipart upload support to handle large data uploads more efficiently. The implementation includes server capability detection via the /api/v1/info endpoint, a caching mechanism for server info, and fallback logic to maintain backward compatibility.

Key Changes:

  • Added multipart upload protocol with initiate/upload/complete/abort endpoints
  • Implemented server capability detection with 60-second caching
  • Enhanced upload functions to automatically use multipart for oversized payloads

Reviewed changes

Copilot reviewed 2 out of 3 changed files in this pull request and generated 5 comments.

| File | Description |
| --- | --- |
| core/domain-http/src/domain_data.rs | Implements multipart upload protocol, server info caching, and integrates multipart logic into existing upload streams |
| core/domain-http/Cargo.toml | Version bump to 1.5.0 |

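The 60-second server-info cache with capability detection described in the overview might look roughly like the following. This is a simplified synchronous sketch under assumptions: `ServerInfo`, `InfoCache`, and the field names are illustrative, not the PR's actual API in domain_data.rs.

```rust
// Sketch of a TTL cache for server capability info. The `fetch` closure
// stands in for an HTTP GET of /api/v1/info; the real implementation is
// async and lives in core/domain-http/src/domain_data.rs.
use std::time::{Duration, Instant};

#[derive(Clone, Debug, PartialEq)]
struct ServerInfo {
    multipart_enabled: bool,
    request_max_bytes: i64,
}

struct InfoCache {
    ttl: Duration,
    cached: Option<(Instant, ServerInfo)>,
}

impl InfoCache {
    fn new() -> Self {
        // 60-second TTL, matching the caching interval described in the PR.
        Self { ttl: Duration::from_secs(60), cached: None }
    }

    /// Return cached info if still fresh; otherwise call `fetch` and cache
    /// the result stamped with `now`.
    fn get<F>(&mut self, now: Instant, fetch: F) -> ServerInfo
    where
        F: FnOnce() -> ServerInfo,
    {
        if let Some((at, info)) = &self.cached {
            if now.duration_since(*at) < self.ttl {
                return info.clone();
            }
        }
        let info = fetch();
        self.cached = Some((now, info.clone()));
        info
    }
}
```

Passing `now` explicitly keeps the sketch deterministic; a real implementation would call `Instant::now()` internally and guard the cache with an async-aware lock.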

```rust
use bytes::Bytes;
use futures::{SinkExt, Stream, channel::mpsc, stream::StreamExt};
use reqwest::{Body, Client, Response};
use futures::lock::Mutex;
```

Copilot AI Jan 9, 2026

Using futures::lock::Mutex can cause performance issues in async contexts. Consider using tokio::sync::Mutex instead, which is optimized for async/await and provides better performance in Tokio runtime environments.

```rust
struct Batch {
    tx: mpsc::Sender<Vec<u8>>,
    done: oneshot::Receiver<Result<Vec<DomainDataMetadata>, DomainError>>,
}
```

Copilot AI Jan 9, 2026

The oneshot channel is used but not imported. Add use futures::channel::oneshot; or use tokio::sync::oneshot; to the imports.

```rust
let (name, data_type, existing_id) = match action {
    DomainAction::Create { name, data_type } => (name, data_type, None),
```

Copilot AI Jan 9, 2026

Using empty strings for name and data_type in the Update action is unclear. Consider using more explicit placeholder values or adding a comment explaining why these fields are empty for updates.

Suggested change:

```diff
 DomainAction::Create { name, data_type } => (name, data_type, None),
+// For updates, `name` and `data_type` are determined by the existing record on the server,
+// so we intentionally send empty strings and only populate `existing_id`.
```
Comment on lines +598 to +600

```rust
// If we can't determine a meaningful request size limit, keep the existing streaming behavior.
if request_max_bytes <= 0 || !multipart_enabled {
```

Copilot AI Jan 9, 2026

The condition request_max_bytes <= 0 treats 0 and negative values the same way. Since request_max_bytes is i64, consider explicitly checking for values less than or equal to zero separately, or document why negative values are treated the same as zero.

Suggested change:

```diff
-// If we can't determine a meaningful request size limit, keep the existing streaming behavior.
-if request_max_bytes <= 0 || !multipart_enabled {
+let has_meaningful_request_limit = request_max_bytes > 0;
+// If we can't determine a meaningful request size limit, keep the existing streaming behavior.
+if !has_meaningful_request_limit || !multipart_enabled {
```
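For context on this size check: once a meaningful limit is known, the payload is split into parts no larger than the server's request limit. The following is a hypothetical helper illustrating that decision, not code from this PR; `split_into_parts` and its signature are assumptions.

```rust
// Hypothetical helper: split a payload into multipart chunks of at most
// `request_max_bytes` bytes each. A non-positive limit means the limit is
// unknown, in which case the caller keeps the existing streaming behavior.
fn split_into_parts(payload: &[u8], request_max_bytes: i64) -> Option<Vec<&[u8]>> {
    if request_max_bytes <= 0 {
        // No meaningful request size limit: signal fallback to streaming.
        return None;
    }
    let chunk = request_max_bytes as usize;
    // `chunks` yields slices of `chunk` bytes; the last one may be shorter.
    Some(payload.chunks(chunk).collect())
}
```

A 10-byte payload with a 4-byte limit would yield three parts of 4, 4, and 2 bytes, while a zero or negative limit yields `None` and the single-request path is kept.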
```rust
}
Err(e) => {
    if is_unsupported_endpoint_error(&e) {
        // Endpoint not supported: fall back to single upload (will likely 413).
```

Copilot AI Jan 9, 2026

The comment mentions '(will likely 413)' but doesn't explain what should happen if that occurs. Consider clarifying the expected behavior or error handling when the fallback also fails with a 413 status.

Suggested change:

```diff
-// Endpoint not supported: fall back to single upload (will likely 413).
+// Endpoint not supported: fall back to single upload (which may fail with 413; in that case the error is propagated to the caller).
```
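The fallback decision discussed in this hunk can be sketched in isolation. The error and path types below are illustrative stand-ins, not the crate's real `DomainError`.

```rust
// Illustrative sketch of the fallback logic: when the multipart endpoints
// are unsupported by the server, retry as a legacy single upload; any other
// error (including a later 413 from that fallback) propagates to the caller.
#[derive(Debug, PartialEq)]
enum UploadError {
    UnsupportedEndpoint,
    PayloadTooLarge, // HTTP 413
    Other(String),
}

#[derive(Debug, PartialEq)]
enum UploadPath {
    Multipart,
    SingleFallback,
}

fn choose_path(multipart_result: Result<(), UploadError>) -> Result<UploadPath, UploadError> {
    match multipart_result {
        Ok(()) => Ok(UploadPath::Multipart),
        // Endpoint not supported: fall back to a single upload. If the
        // fallback then fails (e.g. with 413), that error is not swallowed
        // here; it reaches the caller unchanged.
        Err(UploadError::UnsupportedEndpoint) => Ok(UploadPath::SingleFallback),
        Err(e) => Err(e),
    }
}
```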
@tatiesmars tatiesmars merged commit 440b2d6 into main Jan 12, 2026
6 checks passed
@tatiesmars tatiesmars deleted the feature/add-multipart branch January 12, 2026 01:21

5 participants