1 change: 1 addition & 0 deletions .changeset/http-timeouts.md
@@ -0,0 +1 @@
---
"@googleworkspace/cli": patch
---

fix(client): add 10s connect timeout to prevent hangs on initial connection
14 changes: 9 additions & 5 deletions src/client.rs
@@ -1,5 +1,12 @@
use reqwest::header::{HeaderMap, HeaderValue};

const MAX_RETRIES: u32 = 3;
/// Maximum seconds to sleep on a 429 Retry-After header. Prevents a hostile
/// or misconfigured server from hanging the process indefinitely.
const MAX_RETRY_DELAY_SECS: u64 = 60;
const CONNECT_TIMEOUT_SECS: u64 = 10;
const REQUEST_TIMEOUT_SECS: u64 = 600;

pub fn build_client() -> Result<reqwest::Client, crate::error::GwsError> {
let mut headers = HeaderMap::new();
let name = env!("CARGO_PKG_NAME");
@@ -13,17 +20,14 @@ pub fn build_client() -> Result<reqwest::Client, crate::error::GwsError> {

reqwest::Client::builder()
.default_headers(headers)
.connect_timeout(std::time::Duration::from_secs(CONNECT_TIMEOUT_SECS))
.timeout(std::time::Duration::from_secs(REQUEST_TIMEOUT_SECS))
Contributor comment (critical):

The global request timeout set with .timeout() might unintentionally break support for large file operations and long-running API streams.

According to the reqwest documentation, this timeout applies to the entire request-response lifecycle. For file downloads, this includes the time to download the complete file, and for streaming APIs, it applies to the entire duration of the stream.

A large file download (e.g., 1GB over a 10 Mbps connection) could take over 10 minutes, causing it to fail. Similarly, any API stream that needs to stay open for more than 10 minutes would be terminated. This seems to contradict the PR's goal of allowing these operations to complete successfully.

Consider removing the global .timeout() from the client builder for now. The connect_timeout is sufficient to prevent hangs on initial connection. A more robust solution for request timeouts would be to apply them on a per-request basis where appropriate, outside of the shared client configuration.
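A per-request timeout along the lines the reviewer suggests might look like this. This is a sketch, not code from the PR: the function name `fetch_metadata` and the URL parameter are illustrative, and it assumes reqwest's `RequestBuilder::timeout`, which overrides any client-wide timeout for that single request.

```rust
use std::time::Duration;

// Sketch: apply the timeout per request instead of on the shared client,
// so large downloads and long-lived streams are not cut off globally.
async fn fetch_metadata(client: &reqwest::Client, url: &str) -> reqwest::Result<String> {
    client
        .get(url)
        // Only short, bounded operations get a deadline; omit this call
        // for downloads or streams that may legitimately run long.
        .timeout(Duration::from_secs(120))
        .send()
        .await?
        .text()
        .await
}
```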

.build()
Comment on lines +23 to 25
Contributor comment (high):

The pull request description mentions adding a 120-second request timeout, but this change is missing from the implementation. Adding a request timeout is crucial to prevent the client from hanging indefinitely when a stream stalls, which is one of the goals of this PR.

I've added the request timeout to the client builder. Please also define a REQUEST_TIMEOUT_SECS constant for the value 120 at the top of the file, similar to how CONNECT_TIMEOUT_SECS is defined.

Once this is done, please also update the changeset message in .changeset/http-timeouts.md to reflect the full scope of the changes.

        .connect_timeout(std::time::Duration::from_secs(CONNECT_TIMEOUT_SECS))
        .timeout(std::time::Duration::from_secs(120))
        .build()

.map_err(|e| {
crate::error::GwsError::Other(anyhow::anyhow!("Failed to build HTTP client: {e}"))
})
}

const MAX_RETRIES: u32 = 3;
/// Maximum seconds to sleep on a 429 Retry-After header. Prevents a hostile
/// or misconfigured server from hanging the process indefinitely.
const MAX_RETRY_DELAY_SECS: u64 = 60;

/// Send an HTTP request with automatic retry on 429 (rate limit) responses.
/// Respects the `Retry-After` header; falls back to exponential backoff (1s, 2s, 4s).
pub async fn send_with_retry(
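The body of `send_with_retry` is elided in this diff, but its doc comment describes the delay policy: honor `Retry-After`, fall back to exponential backoff (1s, 2s, 4s), and cap sleeps at `MAX_RETRY_DELAY_SECS`. A minimal std-only sketch of that delay calculation (the helper name `capped_delay_secs` and its `Option<u64>` parameter are illustrative, not taken from the diff):

```rust
const MAX_RETRY_DELAY_SECS: u64 = 60;

/// Illustrative sketch: choose the sleep before retry `attempt` (0-based).
/// A parsed Retry-After value wins, but is capped so a hostile or
/// misconfigured server cannot stall the process indefinitely.
fn capped_delay_secs(retry_after: Option<u64>, attempt: u32) -> u64 {
    // Fallback: exponential backoff 1s, 2s, 4s for attempts 0, 1, 2.
    let base = retry_after.unwrap_or(1u64 << attempt);
    base.min(MAX_RETRY_DELAY_SECS)
}

fn main() {
    assert_eq!(capped_delay_secs(None, 2), 4); // exponential fallback
    assert_eq!(capped_delay_secs(Some(30), 0), 30); // server hint respected
    assert_eq!(capped_delay_secs(Some(600), 0), 60); // excessive hint capped
    println!("ok");
}
```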