What did you do?
The exporter metrics include an "instance" label with an empty value, as shown below:
# HELP jvm_memory_pool_allocated_bytes_total Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously.
# TYPE jvm_memory_pool_allocated_bytes_total counter
jvm_memory_pool_allocated_bytes_total{pool="Eden Space",} 2.398098168E10
jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'profiled nmethods'",} 2.7943424E7
jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-profiled nmethods'",} 1.5028992E7
jvm_memory_pool_allocated_bytes_total{pool="Compressed Class Space",} 3559888.0
jvm_memory_pool_allocated_bytes_total{pool="Metaspace",} 3.3488968E7
jvm_memory_pool_allocated_bytes_total{pool="Tenured Gen",} 2.40252464E8
jvm_memory_pool_allocated_bytes_total{pool="Survivor Space",} 2.89199688E8
jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-nmethods'",} 1534848.0
# HELP jvm_info VM version info
# TYPE jvm_info gauge
jvm_info{runtime="OpenJDK Runtime Environment",vendor="Eclipse Adoptium",version="21.0.4+7-LTS",} 1.0
# HELP cwagent_logical_disk_free_space_average CloudWatch metric CWAgent LogicalDisk % Free Space Dimensions: [ImageId, InstanceId, InstanceType, instance, objectname] Statistic: Average Unit: None
# TYPE cwagent_logical_disk_free_space_average gauge
cwagent_logical_disk_free_space_average{job="cwagent",instance="",instance="C:",instance_id="i-xxxxx",image_id="ami-xxxxx",objectname="LogicalDisk",instance_type="t3.medium",} 78.53746032714844 1758697320000
What did you expect to see?
I don't want the blank "instance" label in the exporter metrics. It causes a problem when Prometheus scrapes this metric: Prometheus does not keep the "instance" label that actually carries the drive name ("C:"), which appears right next to the blank "instance" label.
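For illustration, the expected exposition line would look like the one above but without the empty instance="" pair (same labels and value; this is a sketch of the desired output, not actual exporter output):

cwagent_logical_disk_free_space_average{job="cwagent",instance="C:",instance_id="i-xxxxx",image_id="ami-xxxxx",objectname="LogicalDisk",instance_type="t3.medium",} 78.53746032714844 1758697320000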
See below: when Prometheus scrapes this metric, it drops the "instance" label that contains the drive name:
{Account="xxx", Region="ap-south-1", __name__="cwagent_logical_disk_free_space_average", cluster_name="xxx", exported_job="cwagent", image_id="ami-xxx", instance_id="i-xxx", instance_type="m5a.xlarge", job="cloudwatch-exporter", objectname="LogicalDisk", prometheus="monitoring/xxx-prometheus-kube-prometheus", prometheus_replica="prometheus-low-ss-05-prometheus-kube-prometheus-0"}
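The exported_job="cwagent" label above suggests the scrape job uses Prometheus's default honor_labels: false, under which scraped labels that clash with the server-attached job/instance labels are renamed to exported_*. Below is a minimal scrape-config sketch with honor_labels: true, which keeps the job/instance values from the scraped data instead; the job name and target address are placeholders, and this is only a possible mitigation to experiment with, not a confirmed fix for the blank duplicate label emitted by the exporter:

scrape_configs:
  - job_name: cloudwatch-exporter              # placeholder job name
    honor_labels: true                         # keep job/instance labels coming from the scraped data
    static_configs:
      - targets: ['cloudwatch-exporter:9106']  # placeholder target; 9106 is the exporter's default port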
Environment
- Exporter version: 0.16.0
- Operating system & architecture: EKS
- Running in containers? y
- Using the official image? y
Exporter configuration file
image:
  repository: xxx/prometheus-cloudwatch-exporter
  tag: 0.16.0
  pullSecrets:
    - name: "xxx"
service:
  labels:
    logging: "true"
    app: as1-cloudwatch-exporter
    bu: corp
    svc: qa-prometheus-cloudwatch-exporter
    env: qa
pod:
  annotations:
    monitoring: "true"
    email: 'vaibhav.ingulkar@xxxx.com'
  labels:
    app: as1-cloudwatch-exporter
    logging: "true"
    bu: corp
    svc: qa-prometheus-cloudwatch-exporter
    env: qa
resources:
  limits:
    cpu: 500m
    memory: 500Mi
  requests:
    cpu: 450m
    memory: 450Mi
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::xxxxx:role/qa-prom-cloudwatch-exporter-role"
config: |-
  # This is the default configuration for prometheus-cloudwatch-exporter
  region: ap-south-1
  delay_seconds: 60
  period_seconds: 3600
  metrics:
    - aws_namespace: CWAgent
      aws_metric_name: LogicalDisk % Free Space
      aws_dimensions: [ImageId, InstanceId, InstanceType, instance, objectname]
      aws_statistics: [Average]
      aws_tag_select:
        resource_type_selection: ec2:instance
        resource_id_dimension: InstanceId
tolerations:
  - effect: NoSchedule
    key: type
    operator: Equal
    value: system
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: type
              operator: In
              values:
                - system
priorityClassName: "system-cluster-critical"