
feat: make model metrics endpoints configurable #1000


Open
wants to merge 1 commit into main from feat_config_metric

Conversation

nayihz
Contributor

@nayihz nayihz commented Jun 17, 2025

fix: #16

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Jun 17, 2025
@k8s-ci-robot k8s-ci-robot requested a review from Jeffwan June 17, 2025 10:10
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: nayihz
Once this PR has been reviewed and has the lgtm label, please assign danehans for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Jun 17, 2025

netlify bot commented Jun 17, 2025

Deploy Preview for gateway-api-inference-extension ready!

Name Link
🔨 Latest commit d86effa
🔍 Latest deploy log https://app.netlify.com/projects/gateway-api-inference-extension/deploys/6854b9cdb5b6cb000878ff17
😎 Deploy Preview https://deploy-preview-1000--gateway-api-inference-extension.netlify.app

@nirrozenbaum
Contributor

nirrozenbaum commented Jun 17, 2025

I have some doubts about adding additional fields to InferencePool.
There are alternative ways to achieve the same goal, such as command-line args.
Changes to CRDs should be discussed and reach broad agreement through proposals.

cc @kfswain @ahg-g @robscott @danehans @elevran

/hold for others to comment.

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jun 17, 2025
@nayihz nayihz force-pushed the feat_config_metric branch from ffad486 to f57478d Compare June 17, 2025 10:34
@nayihz
Contributor Author

nayihz commented Jun 17, 2025

@nirrozenbaum, thanks for your advice. I think you are right.
Command-line args or environment variables, which do you prefer?

@nirrozenbaum
Contributor

@nirrozenbaum, thanks for your advice. I think you are right. Command-line args or environment variables, which do you prefer?

@nayihz I would start with command-line args with default values (the existing ones).
We can always iterate if needed.

@nayihz nayihz force-pushed the feat_config_metric branch from f57478d to 9fa17f7 Compare June 17, 2025 12:40
@elevran
Contributor

elevran commented Jun 17, 2025

@nayihz @nirrozenbaum this introduces a fixed endpoint for all model servers in the pool.
Would it make sense to use the Prometheus-formatted annotations as the source of truth when present and fall back to the configuration when missing?
For example:

apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"
    prometheus.io/port: "8080"
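
A rough sketch (illustrative only, not part of this PR) of how that lookup could work on the scraper side, assuming a hypothetical metricsEndpoint helper: prefer the pod's prometheus.io annotations when present and fall back to the configured defaults otherwise.

import (
	"strconv"

	corev1 "k8s.io/api/core/v1"
)

// metricsEndpoint is a hypothetical helper: it prefers the pod's
// prometheus.io annotations and falls back to the configured values.
func metricsEndpoint(pod *corev1.Pod, defaultPort int32, defaultPath string) (int32, string) {
	port, path := defaultPort, defaultPath
	if v, ok := pod.Annotations["prometheus.io/port"]; ok {
		if n, err := strconv.ParseInt(v, 10, 32); err == nil {
			port = int32(n)
		}
	}
	if v, ok := pod.Annotations["prometheus.io/path"]; ok {
		path = v
	}
	return port, path
}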

@nirrozenbaum
Contributor

nirrozenbaum commented Jun 17, 2025

@nayihz @nirrozenbaum this introduces a fixed endpoint for all model servers in the pool. Would it make sense to use the Prometheus-formatted annotations as the source of truth when present and fall back to the configuration when missing? For example:

apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"
    prometheus.io/port: "8080"

@elevran we already have a fixed endpoint, so this PR is not introducing it :). The intention was to make that endpoint configurable.
But your suggestion makes sense as an improvement.

@nirrozenbaum
Contributor

/unhold

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jun 17, 2025
@nayihz nayihz force-pushed the feat_config_metric branch from 9fa17f7 to 7466a28 Compare June 18, 2025 01:34
@nayihz nayihz force-pushed the feat_config_metric branch 2 times, most recently from 528d53f to 937f686 Compare June 19, 2025 13:33
@nayihz nayihz force-pushed the feat_config_metric branch from 937f686 to d86effa Compare June 20, 2025 01:30
@@ -110,6 +110,10 @@ var (
"vllm:lora_requests_info",
"Prometheus metric for the LoRA info metrics (must be in vLLM label format).")

modelServerMetricsPort = flag.Int("modelServerMetricsPort", 0, "Port to scrape metrics from pods. "+
"Default value will be set to InferencePool.Spec.TargetPortNumber if not set.")
modelServerMetricsPath = flag.String("modelServerMetricsPath", "/metrics", "Path to scrape metrics from pods")
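
The help text above says modelServerMetricsPort falls back to InferencePool.Spec.TargetPortNumber when left at its 0 default; a minimal sketch of that resolution (illustrative only, not the PR's actual code):

// resolveMetricsPort is an illustrative helper: an explicitly set flag
// value wins, otherwise the pool's TargetPortNumber is used.
func resolveMetricsPort(flagPort int, targetPortNumber int32) int32 {
	if flagPort != 0 {
		return int32(flagPort)
	}
	return targetPortNumber
}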
Collaborator

Sorry to be a pain, but we may want to consider making these env vars so that they can work more cleanly with our helm charts via https://github.com/kubernetes-sigs/gateway-api-inference-extension/tree/main/config/charts/inferencepool#install-with-custom-environment-variables

We could add support for flags as well, but this is an already established path.

Contributor Author

When a user sets both the env variables and the flags at the same time, which one has higher priority?

Contributor

When both command-line flags and environment variables are used to configure the same setting, the standard behavior is to prioritize command-line flags. This is because command-line flags represent explicit, per-invocation intent, while environment variables often represent more general or persistent configuration. Viper is a good reference implementation of this precedence.
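
A minimal sketch of that precedence (an illustration, not this project's implementation; the MODEL_SERVER_METRICS_PATH variable name is hypothetical): the env var only seeds the flag's default, so a flag passed on the command line always wins.

package main

import (
	"flag"
	"fmt"
	"os"
)

// envOrDefault returns the named environment variable's value,
// or fallback when the variable is unset.
func envOrDefault(key, fallback string) string {
	if v, ok := os.LookupEnv(key); ok {
		return v
	}
	return fallback
}

func main() {
	// The env var supplies the default; an explicit --modelServerMetricsPath
	// on the command line overrides it, so flags take priority.
	metricsPath := flag.String("modelServerMetricsPath",
		envOrDefault("MODEL_SERVER_METRICS_PATH", "/metrics"),
		"Path to scrape metrics from pods")
	flag.Parse()
	fmt.Println("scraping metrics from", *metricsPath)
}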

Contributor

PTAL at https://gist.github.com/danehans/7eae063e205141323ffae5428acb736a for an example implementation snippet.

@kfswain
Collaborator

kfswain commented Jun 24, 2025

Looks good for the most part; added a comment about switching to env vars so this interfaces more cleanly with our helm chart.

Labels
cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/M Denotes a PR that changes 30-99 lines, ignoring generated files.
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Expose baseline algorithm parameters as configurable
7 participants