27 changes: 26 additions & 1 deletion docs/how-to/exposing-a-metrics-endpoint.md
@@ -58,6 +58,31 @@ class ScrapableCharm:
}])
```

The `*` wildcard in the target address is the most common pattern. At scrape
time, Prometheus expands it to one scrape job per unit, each labeled with the
corresponding `juju_unit` topology label.
Comment on lines +61 to +63

**Copilot AI** commented on Apr 9, 2026:
The statement that Prometheus "expands" the * target is inaccurate: Prometheus treats static_configs.targets as literal host:port strings and does not expand *. The per-unit expansion/juju_unit enrichment is done before Prometheus receives the final scrape config (e.g., by the prometheus_scrape library / charm generating the jobs). Please reword to avoid attributing this behavior to Prometheus itself.

Suggested change, replacing:

> The `*` wildcard in the target address is the most common pattern. At scrape
> time, Prometheus expands it to one scrape job per unit, each labeled with the
> corresponding `juju_unit` topology label.

with:

> The `*` wildcard in the target address is the most common pattern. The
> `prometheus_scrape` library uses it to generate one scrape target per unit
> before Prometheus receives the final scrape configuration, and each target is
> labeled with the corresponding `juju_unit` topology label.
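The per-unit expansion described in this comment can be sketched with a toy helper. This is hypothetical illustration code, not the `prometheus_scrape` library's actual implementation; the function name and the shape of `unit_addresses` are assumptions:

```python
def expand_wildcard(target, unit_addresses):
    """Expand a "*:PORT" target into one concrete target per unit.

    `unit_addresses` maps unit names (e.g. "myapp/0") to addresses.
    Hypothetical helper for illustration only.
    """
    host, port = target.rsplit(":", 1)
    if host != "*":
        # Non-wildcard targets are passed through unchanged
        return {None: target}
    # One concrete target per unit, so each generated scrape job can
    # carry a juju_unit label identifying the unit it points at
    return {unit: f"{addr}:{port}" for unit, addr in unit_addresses.items()}

expanded = expand_wildcard(
    "*:8080", {"myapp/0": "10.1.14.39", "myapp/1": "10.1.14.40"}
)
print(expanded)  # {'myapp/0': '10.1.14.39:8080', 'myapp/1': '10.1.14.40:8080'}
```

The point is that the wildcard is resolved into literal `host:port` strings before Prometheus ever sees the scrape configuration.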

If your workload requires explicit hostnames or IP addresses instead of
wildcards (for example, for TLS with strict SNI validation), you can use
fully qualified addresses as targets:

```python
class ScrapableCharm:
    # ...
    def __init__(self, *args):
        # ...
        self.metrics_endpoint_provider = MetricsEndpointProvider(
            self,
            jobs=[{
                "static_configs": [{
                    "targets": ["myapp-0.myapp-endpoints.mymodel.svc.cluster.local:8080"]
                }],
            }])
```

Non-wildcard targets whose host matches a known unit address or FQDN are
also enriched with the `juju_unit` label, just like wildcard targets.
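For illustration, a matched non-wildcard target could end up in a per-unit scrape job shaped roughly like the following. The dict layout and label values here are assumptions for this sketch, not the `prometheus_scrape` library's literal output:

```python
# Hypothetical shape of a generated scrape job for a matched target.
# Label names follow the Juju topology convention used in these docs;
# the exact structure is an illustration only.
generated_job = {
    "static_configs": [{
        "targets": ["myapp-0.myapp-endpoints.mymodel.svc.cluster.local:8080"],
        "labels": {
            "juju_model": "mymodel",
            "juju_application": "myapp",
            # Present because the target's host matched a known unit:
            "juju_unit": "myapp/0",
        },
    }],
}

labels = generated_job["static_configs"][0]["labels"]
print(labels["juju_unit"])  # myapp/0
```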

## Declaring the relation

As a last step, you need to declare the relation in your charm's `metadata.yaml` file.
@@ -69,4 +94,4 @@ provides:
```

Congratulations! You will now be able to add an integration between your charm
and a scraper!
19 changes: 15 additions & 4 deletions docs/reference/juju-topology-labels.md
@@ -37,10 +37,21 @@ Incidental dashboards coming in from a git repository via the `cos-configuration
When dashboards are forwarded through a `grafana-agent` intermediary, the juju topology labels of the charm of origin are injected (and not `grafana-agent`'s). Any subsequent chaining to additional grafana agent charms would leave the labels intact.

### Charms relating through `cos-proxy`
`cos-proxy` will apply its own topology to the labels, as old LMA-provider units don't implement the more modern interfaces that we would need to add topology to the telemetry.

## Metrics
Metrics are workload-specific and vary from charm to charm.

### Charms relating through `metrics-endpoint`

When a charm relates to `prometheus-k8s`, `opentelemetry-collector-k8s` or `opentelemetry-collector` via the `metrics-endpoint` interface, the `prometheus_scrape` library generates per-unit scrape jobs enriched with all Juju topology labels, including `juju_unit`.

Scrape targets can be specified in two ways:

- **Wildcard targets** (e.g. `*:8080`): The wildcard is expanded into one scrape job per unit, each targeting the unit's address and labeled with the corresponding `juju_unit`.
- **Non-wildcard targets** (e.g. `alertmanager-0.alertmanager-endpoints.svc.cluster.local:9093` or `10.1.14.39:8080`): The library matches each target's host (IP address or FQDN) against known unit addresses. Matched targets produce a per-unit scrape job with `juju_unit`, just like wildcard targets. Targets that cannot be matched to any known unit are grouped in a single job with all other topology labels but without `juju_unit`.

This ensures that metrics from any charmed workload — regardless of how its targets are defined — can be filtered by unit in Grafana dashboards and alert expressions.
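The matching rule described above can be sketched as a small helper. This is hypothetical code for illustration only; the `prometheus_scrape` library's real implementation differs, and the function name and data shapes are assumptions:

```python
def group_targets(targets, unit_addresses):
    """Split targets into per-unit jobs (with a juju_unit label) and one
    catch-all job for hosts that match no known unit.

    `unit_addresses` maps unit names to the set of addresses/FQDNs the
    unit is known by. Hypothetical helper for illustration only.
    """
    jobs = []
    unmatched = []
    for target in targets:
        host = target.rsplit(":", 1)[0]
        unit = next(
            (u for u, addrs in unit_addresses.items() if host in addrs), None
        )
        if unit is not None:
            # Matched targets get their own job, labeled with juju_unit
            jobs.append({"targets": [target], "labels": {"juju_unit": unit}})
        else:
            unmatched.append(target)
    if unmatched:
        # Unmatched targets share one job; the other topology labels
        # would still be attached here, but juju_unit is omitted
        jobs.append({"targets": unmatched, "labels": {}})
    return jobs

jobs = group_targets(
    ["10.1.14.39:8080", "other.example.com:9100"],
    {"myapp/0": {"10.1.14.39"}},
)
```

Here `jobs[0]` carries `juju_unit: myapp/0`, while `other.example.com:9100` lands in the catch-all job without a `juju_unit` label.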

### Charms relating through `grafana-agent` (`-k8s` or not)
For `grafana-agent`: any metrics coming from the principal charm will be tagged with the topology of the principal unit. The generic Linux metrics coming from the node exporter will be tagged with the grafana-agent unit topology.
@@ -80,7 +91,7 @@ In `grafana-agent`, logs scraped from files, such as `/var/log`, will be tagged
In `grafana-agent-k8s`, the charm will not modify the topology.

### Charms relating through `cos-proxy`
`cos-proxy` will apply its own topology to the logs.

## Traces
Any charm can stream traces to Tempo using the `tracing` charm lib. Usually this is done by sending the traces to a `grafana-agent` (soon to be replaced by the OTEL collector), which forwards them to the COS stack. The agent is responsible for attaching to any trace passing through it the Juju topology of the unit that generated it, if known, or else its own topology (for uncharmed workloads).