Split protocol into 3 separate services #230

Merged

Conversation

@farost farost commented Aug 7, 2025

Release notes: usage and product changes

Split the protocol into 3 services (sketched after the list):

  • TypeDBService (contains common TypeDB logic): server, database, user, and transaction management, etc.
  • TypeDBClusteringService (contains TypeDB logic regarding clustering): replica registration, server status.
  • RaftService (contains Raft logic): replication operations.
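
A minimal proto3 sketch of this split is below. Only the three service names, the ServersGet and registration methods, and the Raft naming come from this PR; every rpc placement, message shape, and stub here is an illustrative assumption, not the actual protocol.

syntax = "proto3";

// Stub messages: the real definitions live in their own proto files.
message Transaction { message Client {} message Server {} }
message ServersGet { message Req {} message Res {} }
message Registration { message Req {} message Res {} }
message ServerStatus { message Req {} message Res {} }
message Raft { message Req {} message Res {} }

// Common TypeDB logic: server, database, user, and transaction management.
service TypeDBService {
  rpc servers_get (ServersGet.Req) returns (ServersGet.Res);
  rpc transaction (stream Transaction.Client) returns (stream Transaction.Server);
}

// Clustering logic: replica registration and server status.
service TypeDBClusteringService {
  rpc registration (Registration.Req) returns (Registration.Res);
  rpc server_status (ServerStatus.Req) returns (ServerStatus.Res);
}

// Raft logic: replication and the rest of the Raft protocol traffic.
service RaftService {
  rpc raft (Raft.Req) returns (Raft.Res);
}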

Implementation

Split the old RaftPeeringService into TypeDBClusteringService and RaftService.

Move each service and its implementation files into its own subpackage.

Rename methods. Introduce a ServersGet method for retrieving the status of a single server. Introduce a registration method to validate a replica registration operation before submitting it to Raft.
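
As an illustration of the single-server status retrieval, ServersGet might pair a server address with a status reply. This is a sketch only; the field names and shapes are assumptions:

syntax = "proto3";

// Hypothetical Req/Res shapes for ServersGet; the real fields may differ.
message ServersGet {
  message Req {
    string address = 1;  // which server to look up (assumed field)
  }
  message Res {
    string address = 1;  // the server's identity (assumed field)
    string status = 2;   // the server's current status (assumed field)
  }
}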

@farost farost requested a review from lolski August 7, 2025 09:15
@farost farost changed the title from "Add Peering Ping method" to "Split protocol into 3 separate services" Aug 13, 2025
@farost farost marked this pull request as ready for review August 13, 2025 14:36
@farost farost requested a review from haikalpribadi as a code owner August 13, 2025 14:36

- // TODO: Rename to Replication
- message Peering {
+ message Replication {
Member

Now that I'm looking into this as a reader, I think the message should be called Raft rather than Replication. Similarly, RaftService should have one rpc endpoint, raft.

It's because the messages we exchange are not just replication messages, but any messages defined by the Raft protocol. They can be "request vote", "append entries" (i.e., the message responsible for replication), or "cluster conf change" (i.e., add/remove node).
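
Under that suggestion, the single raft endpoint could multiplex all three kinds of traffic through a oneof. A sketch, with hypothetical payload message names:

syntax = "proto3";

// Hypothetical stubs for the three Raft message kinds named above.
message RequestVote {}        // leader election
message AppendEntries {}      // log replication
message ClusterConfChange {}  // add / remove node

message Raft {
  message Req {
    oneof payload {
      RequestVote request_vote = 1;
      AppendEntries append_entries = 2;
      ClusterConfChange conf_change = 3;
    }
  }
  message Res {}
}

service RaftService {
  rpc raft (Raft.Req) returns (Raft.Res);
}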

Member Author

Hmmm okay. I thought we had considered extending this protocol and adding more methods. Your point about vote requests, etc. is reasonable; I'll rename it now.


import "proto/typedb_service/server.proto";

message Registration {
Member

When does this message get sent again, and by whom to whom? The reason I'm asking is that I now feel the name Registration might not entirely describe what the message does.

Member Author

A leader server requests registration of a secondary replica from another server. That server receives the Registration request and either replies positively or rejects it (either because of logical validation failures or simply because there is no suitable server to handle the request).
So the request is Registration: I (the leader) want you (the secondary) to register. I'm open to other names.

Member

How does the "registration" relate to the "add node conf change" Raft message? In terms of terminology, they're quite synonymous, even though the latter is where the bulk of the logic for adding a server takes place.

Member Author

We send the registration request to the candidate node. If the candidate approves it, we proceed to submit the "add node conf change" action.
It's called Registration after Servers.Register.Req from TypeDBService. It could alternatively be called RegistrationApprove or something similar.
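
So the flow is two-step: the leader first asks the candidate to approve the registration, and only then submits the "add node conf change" to Raft. A sketch of what the approval message could look like, with assumed field names:

syntax = "proto3";

// Hypothetical shape of the pre-Raft approval step described above.
message Registration {
  message Req {
    string replica_address = 1;  // the secondary the leader wants to register
  }
  message Res {
    bool accepted = 1;   // the candidate approves or rejects the registration
    string reason = 2;   // optional explanation on rejection
  }
}
// Only after an accepting Res does the leader submit the
// "add node conf change" action to Raft.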

@farost farost merged commit 55ed142 into typedb:cluster-support-feature-branch Aug 14, 2025
0 of 2 checks passed
@farost farost deleted the cluster-support-ping branch August 14, 2025 13:47
farost added a commit to typedb/typedb that referenced this pull request Aug 14, 2025
## Product change and motivation
Allow optionally preparing server state objects before creating TypeDB
Servers instead of using builders. Introduce `server_gets`.

Update the server to align with the updated TypeDB protocol with [split
services](typedb/typedb-protocol#230).

## Implementation
Update protocol. 

Expose a separate `initialise_storage` server method to pre-initialise the
storage with the server id, for clustered server state preparation.

Instead of using `Arc<Box<dyn ServerState>>` everywhere, just use
`Arc<dyn ServerState>`:
* The extra `Box` indirection was redundant.
* It prevented casting the dynamic objects to `ClusteredServerState` and
back, because of the two conflicting memory allocation abstractions.

Expose all config fields as `pub`.