
Conversation

@LeoPatOZ (Collaborator) commented Nov 4, 2025

Towards #6

    Timeout,
    #[error("RPC call failed after exhausting all retry attempts: {0}")]
    RetryFailure(Arc<RpcError<TransportErrorKind>>),
    RpcError(Arc<RpcError<TransportErrorKind>>),
@LeoPatOZ (Collaborator, Author):

Clearer this way, since this error might not actually happen while 'retrying'.

Collaborator:

Wondering if it still makes sense to have RetryFailure?

Collaborator:

Maybe it doesn't serve any purpose anymore.
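
For context, a minimal sketch of the error type under discussion. Only the quoted variants and the RetryFailure message come from the diff; the derive, the other messages, and the alloy import path are assumptions:

    use std::sync::Arc;
    use alloy::transports::{RpcError, TransportErrorKind};
    use thiserror::Error;

    #[derive(Debug, Error)]
    pub enum Error {
        // The call did not complete within the configured timeout.
        #[error("RPC call timed out")]
        Timeout,
        // From the diff; the thread questions whether this variant
        // still serves a purpose now that RpcError exists.
        #[error("RPC call failed after exhausting all retry attempts: {0}")]
        RetryFailure(Arc<RpcError<TransportErrorKind>>),
        // The underlying RPC call failed, whether or not retries were involved.
        #[error("RPC call failed: {0}")]
        RpcError(Arc<RpcError<TransportErrorKind>>),
    }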

@0xNeshi self-requested a review November 5, 2025 09:42
@0xNeshi (Collaborator) commented Nov 5, 2025

Removing approval due to this edge case:

  • set a WS primary provider (PP) and an HTTP fallback provider (FP)
  • initiate a subscription
  • PP fails for some reason, so we switch to FP
  • FP doesn't support pubsub
  • RobustProvider returns the last error it encountered, which is FP's "provider doesn't support pubsub" error
    • the actual error should be whatever PP's last error was
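
A minimal sketch of one way the selection could work (RobustProvider's internals are not shown in this thread, so everything here is an assumption): prefer the last error from a pubsub-capable provider over a capability error from a provider that could never have served the subscription.

    // Sketch only: the (supports_pubsub, error) pairing is an assumption.
    fn pick_subscription_error(errors: &[(bool, String)]) -> Option<String> {
        errors
            .iter()
            .rev()
            // Prefer the most recent error from a pubsub-capable provider...
            .find(|(supports_pubsub, _)| *supports_pubsub)
            // ...and only fall back to the overall last error if none exists.
            .or_else(|| errors.last())
            .map(|(_, err)| err.clone())
    }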

Comment on lines 296 to 297
    .map_err(Error::from)?
    .map_err(Error::from)
@LeoPatOZ (Collaborator, Author):

I realised a slight mistake in the multi-provider PR: any RPC error was being returned as a Timeout error.
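
A minimal sketch of the distinction (function and variant names beyond those quoted above are assumptions): only the elapsed timer should become Error::Timeout, while an inner RPC failure maps to its own variant.

    use std::future::Future;
    use std::time::Duration;
    use tokio::time::timeout;

    // Hypothetical error type mirroring the variants discussed above.
    #[derive(Debug)]
    enum Error {
        Timeout,
        Rpc(String),
    }

    // Sketch: wrap an RPC future in a timeout without conflating the failures.
    async fn call_with_timeout<T>(
        fut: impl Future<Output = Result<T, String>>,
        limit: Duration,
    ) -> Result<T, Error> {
        match timeout(limit, fut).await {
            // The timer elapsing is the only Timeout case...
            Err(_elapsed) => Err(Error::Timeout),
            // ...an inner RPC failure keeps its own variant.
            Ok(inner) => inner.map_err(Error::Rpc),
        }
    }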

@LeoPatOZ marked this pull request as ready for review November 6, 2025 12:02
@LeoPatOZ requested a review from pepebndc as a code owner November 6, 2025 12:02
Comment on lines 241 to 262
    for (idx, provider) in self.providers.iter().enumerate() {
        // Skip providers that don't support pubsub if required
        if require_pubsub && !Self::supports_pubsub(provider) {
            if idx == 0 {
                info!("Primary provider doesn't support pubsub, skipping");
            } else {
                info!("Fallback provider {} doesn't support pubsub, skipping", idx);
            }
Collaborator:

Seems having the primary provider in a separate field is useful after all... No matter, leave things as-is for now.
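
For illustration, a sketch of the layout being alluded to (struct and field names are assumptions, not the crate's actual API); keeping the primary in its own field removes the idx == 0 special case:

    // Sketch of the suggested alternative. Names are assumptions.
    struct RobustProvider<P> {
        primary: P,
        fallbacks: Vec<P>,
    }

    impl<P> RobustProvider<P> {
        // Iterate with a human-readable label instead of branching on idx == 0.
        fn providers(&self) -> impl Iterator<Item = (String, &P)> {
            std::iter::once(("primary".to_string(), &self.primary)).chain(
                self.fallbacks
                    .iter()
                    .enumerate()
                    .map(|(i, p)| (format!("fallback {i}"), p)),
            )
        }
    }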

@0xNeshi self-requested a review November 6, 2025 12:12
Comment on lines 505 to 516
    // Verify that the WS connection is actually dead before proceeding:
    // keep polling until a call fails, which shows the connection is gone.
    let mut retries = 0;
    loop {
        let verification = ws_provider.get_block_number().await;
        if verification.is_err() {
            break;
        }
        sleep(Duration::from_millis(50)).await;
        retries += 1;
        assert!(retries < 50, "WS provider should have failed after anvil dropped");
    }
Collaborator:

Is this truly necessary? Won't the WS provider fail immediately with BackendGone?

@LeoPatOZ (Collaborator, Author) Nov 7, 2025:

I didn't have it initially, but the test was super flaky on CI. There just wasn't an error when calling subscribe after the drop, which indicated that the WS provider was still up. I'm not sure of another way to kill the anvil node :/
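
A minimal sketch of that polling as a reusable, deadline-bounded helper, assuming tokio is available (the helper name and bounds are assumptions, not code from the PR):

    use std::future::Future;
    use std::time::Duration;
    use tokio::time::{sleep, Instant};

    // Sketch: poll an async check until it fails or a deadline passes.
    // Useful when dropping anvil doesn't kill the WS connection immediately.
    async fn wait_until_err<F, Fut, T, E>(mut check: F, deadline: Duration)
    where
        F: FnMut() -> Fut,
        Fut: Future<Output = Result<T, E>>,
    {
        let start = Instant::now();
        while check().await.is_ok() {
            assert!(
                start.elapsed() < deadline,
                "WS provider should have failed after anvil dropped"
            );
            sleep(Duration::from_millis(50)).await;
        }
    }

The test could then call wait_until_err(|| async { ws_provider.get_block_number().await }, Duration::from_secs(5)).await before asserting on the failover.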

@0xNeshi (Collaborator) Nov 7, 2025:

Then instead of relying on dropping the Anvil node, you could use alloy::providers::mock::Asserter to force an error on provider_1

Won't work, Asserter doesn't support pubsub

@0xNeshi merged commit 655124a into multiple-providers Nov 7, 2025
10 checks passed
@0xNeshi deleted the pubsub-checks branch November 7, 2025 08:00