Split protocol into 3 separate services #230
Conversation
```diff
- // TODO: Rename to Replication
- message Peering {
+ message Replication {
```
Now that I'm looking into this as a reader, I think the message should be called `Raft` rather than `Replication`. Similarly, `RaftService` should have one rpc endpoint, `raft`.

It's because the messages we exchange are not just replication messages, but any messages defined by the Raft protocol. They can be "request vote", "append entries" (i.e., the message responsible for replication), or "cluster conf change" (i.e., add / remove node).
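For illustration, a minimal proto3 sketch of what such a unified `Raft` message could look like; the message and field names here are hypothetical, not the actual protocol definition:

```proto
syntax = "proto3";

// Hypothetical sketch only: one Raft message type covering all three
// kinds of Raft traffic described above, not just replication.
message Raft {
  // "request vote": leader election.
  message RequestVote {
    uint64 term = 1;
    uint64 candidate_id = 2;
  }
  // "append entries": the message responsible for replication.
  message AppendEntries {
    uint64 term = 1;
    repeated bytes entries = 2;
  }
  // "cluster conf change": add / remove node.
  message ConfChange {
    uint64 node_id = 1;
    bool is_removal = 2;
  }

  oneof kind {
    RequestVote request_vote = 1;
    AppendEntries append_entries = 2;
    ConfChange conf_change = 3;
  }
}
```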
Hmmm okay. I thought we were considering extending this protocol and adding more methods. Your point about vote requests, etc. is reasonable; I'll rename it now.
```proto
import "proto/typedb_service/server.proto";

message Registration {
```
When does this message get sent again, and by whom to whom? I'm asking because I now feel the name `Registration` might not entirely describe what the message does.
A leader server requests registration of a secondary replica from a server. A server receives this `Registration` request and either replies positively or rejects the request (either because of some logical validations or just because there is no suitable server to handle this request).

So the request is `Registration`: I (leader) want you (secondary) to register. I'm open to other names.
How does the "registration" relate to the "add node conf change" Raft message? In terms of terminology, they're quite synonymous, even though the latter is where the bulk of the logic for adding a server takes place.
We send the registration request to the candidate node. If the candidate approves it, we proceed to submit the "add node conf change" action.

It's called `Registration` based on the `Servers.Register.Req` from `TypeDBService`. It could alternatively be called `RegistrationApprove` or something similar.
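For illustration, a proto3 sketch of that handshake; all field names here are hypothetical:

```proto
syntax = "proto3";

// Hypothetical sketch of the registration handshake described above.
// The leader sends Registration.Req to a candidate node; the candidate
// either accepts or rejects (failed validation, or no suitable server).
// Only after approval is the "add node conf change" submitted to Raft.
message Registration {
  message Req {
    uint64 leader_id = 1; // the leader requesting the registration
  }
  message Res {
    bool accepted = 1;
    string reject_reason = 2; // set when the candidate rejects
  }
}
```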
Merged 55ed142 into typedb:cluster-support-feature-branch
## Product change and motivation

Allow optionally preparing server state objects before creating TypeDB Servers instead of using builders. Introduce `server_gets`. Update the server to align with the updated TypeDB protocol with [split services](typedb/typedb-protocol#230).

## Implementation

Update the protocol. Expose a separate `initialise_storage` server method to pre-initialise the storage with the server id for clustered server state preparation. Instead of using `Arc<Box<dyn ServerState>>` everywhere, just use `Arc<dyn ServerState>`:

* It was redundant.
* It was preventing us from casting dynamic objects to `ClusteredServerState`s and vice versa because of the two conflicting memory allocation abstractions.

Expose all config fields as `pub`.
Release notes: usage and product changes
Split the protocol into 3 services: `TypeDBService`, `TypeDBClusteringService`, and `RaftService`.
Implementation
Split the old RaftPeeringService into TypeDBClusteringService and RaftService.
Move separate services and their implementation files into separate subpackages.
Rename methods. Introduce the `ServersGet` method for single server status retrieval. Introduce the `registration` method to verify the replica registration operation before submitting it to Raft.
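As a rough sketch of the resulting layout (rpc and message shapes are assumptions, not the actual definitions; `TypeDBService` is elided):

```proto
syntax = "proto3";

// Hypothetical placeholder messages standing in for the real ones.
message ServersGet {
  message Req {}
  message Res {}
}
message Registration {
  message Req {}
  message Res {}
}
message Raft {
  // The real message would carry the Raft kinds sketched earlier
  // (request vote, append entries, cluster conf change).
  message Msg { bytes payload = 1; }
}

// Cluster management, split out of the old RaftPeeringService.
service TypeDBClusteringService {
  // Single server status retrieval.
  rpc servers_get (ServersGet.Req) returns (ServersGet.Res);
  // Verify a replica registration before submitting it to Raft.
  rpc registration (Registration.Req) returns (Registration.Res);
}

// Raft message exchange, with the single `raft` endpoint discussed above.
service RaftService {
  rpc raft (Raft.Msg) returns (Raft.Msg);
}
```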