
Conversation

@tjungblu
Contributor

/hold

just here for CI runs and cluster bot builds

@openshift-ci openshift-ci bot added do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. labels Oct 27, 2025
@openshift-ci
Contributor

openshift-ci bot commented Oct 27, 2025

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@coderabbitai

coderabbitai bot commented Oct 27, 2025

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Comment @coderabbitai help to get the list of available commands and usage tips.

@openshift-ci
Contributor

openshift-ci bot commented Oct 27, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: tjungblu

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Oct 27, 2025
@tjungblu tjungblu force-pushed the RFE-7051 branch 3 times, most recently from 5920b8e to 689045c on October 29, 2025 at 15:58
@tjungblu
Contributor Author

tjungblu commented Oct 30, 2025

Some quick benchmark results using:

etcdctl check perf --load="xl"

This is running against the normal etcd:

export ETCDCTL_ENDPOINTS="https://localhost:2379"
etcdctl check perf --load="xl" 
FAIL: Throughput too low: 8097 writes/s
PASS: Slowest request took 0.077356s
PASS: Stddev is 0.004320s
FAIL

This is running against localhost in-memory:

export ETCDCTL_ENDPOINTS="https://localhost:20379"
etcdctl check perf --load="xl"
... 
FAIL: Throughput too low: 11476 writes/s
PASS: Slowest request took 0.068861s
PASS: Stddev is 0.002417s
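Taking the two throughput numbers above at face value, the in-memory run comes out roughly 42% faster. A quick back-of-the-envelope check (not a rigorous benchmark):

```shell
# 8097 writes/s (normal etcd) vs. 11476 writes/s (in-memory)
echo "8097 11476" | awk '{ printf "%.1f%%\n", ($2 - $1) / $1 * 100 }'
# prints 41.7%
```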

Now this is pretty underwhelming, so let's try some tuning:

--unsafe-no-fsync=true

FAIL: Throughput too low: 11407 writes/s
PASS: Slowest request took 0.061991s
PASS: Stddev is 0.001785s

This seems to have no effect.

--backend-batch-limit=50000 and --backend-batch-interval=1m up from 10000 / 100ms

FAIL: Throughput too low: 11438 writes/s
PASS: Slowest request took 0.082044s
PASS: Stddev is 0.002382s

Also no real effect, which leads me to believe this benchmark is not really disk-bound in the first place.
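For completeness, the batch-tuning attempt above corresponds to roughly this invocation (a sketch; TLS, data-dir, and other flags elided):

```shell
# defaults are --backend-batch-limit=10000 and --backend-batch-interval=100ms
etcd --backend-batch-limit=50000 --backend-batch-interval=1m ...
```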

Checking the allocation route with:

--backend-bbolt-freelist-type=array instead of the hashmap implementation yields

FAIL: Throughput too low: 11630 writes/s
PASS: Slowest request took 0.066573s
PASS: Stddev is 0.001428s

Performs only slightly better.


After CPU profiling, some interesting findings:

  • most of the time is spent in the netFD.write syscall, flushing etcd's responses to the network
  • there is significant contention in the .Put and processInternalRaftRequestOnce call chain
  • fsync is just 0.9%, matching what we saw above
  • the apply loop is ~6%, matching the results when changing the batch settings
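To reproduce the profile, something along these lines should work (a sketch: it assumes the server was started with --enable-pprof, and the cert/key paths are placeholders):

```shell
# grab a 30s CPU profile from the client endpoint
curl -sk --cert client.crt --key client.key \
  "https://localhost:2379/debug/pprof/profile?seconds=30" -o etcd-cpu.pb.gz
go tool pprof -top etcd-cpu.pb.gz
```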

Unfortunately, the gRPC buffer size options (read and write are at 32K each) require recompilation, so I won't be able to max this out any further today.

Comment on lines 195 to 196
--- OR via SVC, as done below ---
- "/events#https://events-etcd.openshift-etcd.svc:20379"
Contributor Author


This requires dnsPolicy=ClusterFirstWithHostNet on the kube-apiserver static pods.

https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
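On the static pod spec that would look roughly like this (a sketch; hostNetwork: true is assumed, since that is what makes ClusterFirstWithHostNet necessary in the first place):

```yaml
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
```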

This PR contains a dedicated in-memory etcd deployment that will run on
one control plane host and configures the kube-apiserver to send events
to it.

Signed-off-by: Thomas Jungblut <tjungblu@redhat.com>