Conversation

@harunzengin (Contributor)
Closes #383

Since we're catching the :noproc errors at the Xandra.Connection level, we don't need to catch them in Xandra.Cluster.

@harunzengin harunzengin changed the title Catch :noproc errors in Connection and make sure that client doesn't … Catch :noproc errors in Xandra.Connection Jun 13, 2025

```elixir
%Xandra.Error{} = error ->
  {{:error, error}, Map.put(metadata, :reason, error)}
try do
```
@whatyouhide (Owner)
This wraps the whole case expression in a try block. I think that's not right, as it would hide exits that could happen anywhere inside it.

Instead, we should wrap just the :gen_statem.call/2 call in try. That would also make the code significantly more readable, as this is starting to become a really convoluted, nested piece of code.

Basically, do this:

```elixir
case gen_statem_call_trapping_noproc(conn_pid, {:checkout_state_for_next_request, req_alias}) do
  # ...

  # New clause:
  {:error, :noproc} ->
    # ...
end

# Then:
defp gen_statem_call_trapping_noproc(pid, call) do
  :gen_statem.call(pid, call)
catch
  :exit, {:noproc, _} ->
    {:error, :noproc}
end
```

Does that make sense? It will also, incidentally, make the diff a lot slimmer 🙃
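For readers following along, here is a minimal, self-contained sketch of the pattern suggested above: calling :gen_statem.call/2 on a dead pid exits the caller with a {:noproc, _} reason, and a small wrapper can trap that exit and return an error tuple instead. The module and variable names here are hypothetical, not Xandra's actual internals.

```elixir
defmodule NoprocDemo do
  # Hypothetical helper mirroring the reviewer's suggestion: convert the
  # {:noproc, _} exit raised by calling a dead process into an error tuple.
  def gen_statem_call_trapping_noproc(pid, call) do
    :gen_statem.call(pid, call)
  catch
    :exit, {:noproc, _} -> {:error, :noproc}
  end
end

# Spawn a process and wait until it is guaranteed dead.
dead_pid = spawn(fn -> :ok end)
ref = Process.monitor(dead_pid)

receive do
  {:DOWN, ^ref, :process, ^dead_pid, _reason} -> :ok
end

# Without the wrapper this call would exit the caller; with it, the caller
# gets a value it can pattern-match on in the surrounding case.
{:error, :noproc} = NoprocDemo.gen_statem_call_trapping_noproc(dead_pid, :ping)
```

The design win is that the try/catch is confined to the one expression that can legitimately exit with :noproc, so unrelated exits elsewhere in the case still propagate and crash loudly.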

@harunzengin (Contributor, Author)

Makes a lot of sense

@harunzengin harunzengin requested a review from whatyouhide June 23, 2025 14:53
@whatyouhide
Copy link
Owner

Fantastic work 💟

@whatyouhide whatyouhide merged commit d967b72 into whatyouhide:main Jun 24, 2025
5 checks passed


Development

Successfully merging this pull request may close these issues.

Receiving :DOWN message after network failure on node
