
Commit 338289b

committed: another readme update
1 parent f4aeee4 commit 338289b

File tree

8 files changed: 38 additions & 37 deletions

.gitignore

Lines changed: 2 additions & 0 deletions
@@ -14,6 +14,8 @@ __pycache__
 .vscode*
 .coverage
 coverage.xml
+test/poke.py
+compare/*
 
 !.gitkeep
 !/.gitignore

README.md

Lines changed: 24 additions & 5 deletions
@@ -5,11 +5,25 @@
 
 Push metrics from your regular and/or long-running jobs to existing Prometheus/VictoriaMetrics monitoring system.
 
-:no_entry: not tested in real environment yet!
+Currently supports pushes directly to VictoriaMetrics via UDP and HTTP using InfluxDB line protocol as [described here](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html?highlight=telegraf#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf).
 
-Currently supports pushes directly to VictoriaMetrics via UDP and HTTP using InfluxDB line protocol as [described here](https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html?highlight=telegraf#how-to-send-data-from-influxdb-compatible-agents-such-as-telegraf), and into StatsD/statsd-exporter in StatsD format via UDP ([not ideal](https://github.com/prometheus/statsd_exporter#with-statsd)).
+For pure Prometheus setups, pushes into StatsD/statsd-exporter in StatsD format via UDP are supported ([see exporter docs](https://github.com/prometheus/statsd_exporter#with-statsd)). Prometheus and StatsD metric types are not fully compatible, so currently all metrics become StatsD gauges, but `rate`, `increase`, `histogram_quantile` and other PromQL functions produce the same results as if the types had never changed.
 
-## Default labelvalues
+Install it via pip:
+
+```sh
+pip install prometheus-push-client
+```
+
+## Metrics
+
+This library uses the `prometheus-client` metric implementation, with some minor tweaks.
+
+### Separate registry
+
+New metric constructors use a separate `PUSH_REGISTRY` by default, so as not to interfere with other metrics already defined and monitored in existing projects.
+
+### Default labelvalues
 
 With regular prometheus_client, defaults may be defined for either _none_ or _all_ the labels (with `labelvalues`), but that's not enough.

@@ -18,6 +32,9 @@ We probably want to define _some_ defaults, like `hostname`, or more importantly
 Following example shows how to use defaults, and how to override them if necessary.
 
 ```python
+import prometheus_push_client as ppc
+
+
 counter1 = ppc.Counter(
     name="c1",
     labelnames=["VictoriaMetrics_AccountID", "host", "event_type"],
@@ -40,13 +57,15 @@ counter1.labels("non-default", "login").inc()
 
 Metrics with no labels are initialized at creation time. This can have unpleasant side-effect: if we initialize lots of metrics not used in currently running job, background clients will have to push their non-changing values in every synchronization session.
 
-To avoid that we'll have to properly isolate each task's metrics, which can be impossible or rather tricky, or we can create metrics with default, non-changing labels (like `hostname`). Such metrics will be initialized on fisrt use (inc), and we'll be pusing only those we actually used!
+To avoid that we'll have to properly isolate each task's metrics, which can be impossible or rather tricky, or we can create metrics with default, non-changing labels (like `hostname`). Such metrics will be initialized on first use (inc), and we'll be pushing only those we actually used.
 
 ## Background clients
 
 Background clients spawn synchronization jobs "in background" (meaning in a thread or asyncio task) to periodically send all metrics from `ppc.PUSH_REGISTRY` to the destination.
 
-:warning: background clients will attempt to stop gracefully, synchronizing registry "one last time" after job exits or crashes. This _may_ mess up sampling aggregation, I'm not sure yet.
+Clients will attempt to stop gracefully, synchronizing the registry "one last time" after the job exits or crashes. Sometimes this _may_ mess up Grafana sampling, but the worst picture I could artificially create looks like this:
+
+![graceful push effect](./docs/img/graceful_stop_effect01.png)
 
 Best way to use them is via decorators. These clients are intended to be used with long running, but finite tasks, which could be spawned anywhere, therefor not easily accessible by the scraper. If that's not the case -- just use "passive mode" w/ the scraper instead.

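Note: the "Separate registry" and "Default labelvalues" sections added above boil down to keeping push metrics in their own registry and initializing labelled children lazily. A minimal sketch of that idea using plain `prometheus_client` (this is not prometheus-push-client's own API; the registry, metric name and label values below are illustrative only):

```python
from prometheus_client import CollectorRegistry, Counter, generate_latest

# A dedicated registry, analogous to ppc.PUSH_REGISTRY: metrics registered here
# do not interfere with whatever the application already exports via the default REGISTRY.
PUSH_REGISTRY = CollectorRegistry()

jobs_done = Counter(
    "jobs_done",                 # metric name (illustrative)
    "Finished background jobs",  # help text
    ["host", "event_type"],      # labelnames
    registry=PUSH_REGISTRY,
)

# Labelled children are created on first use, which matches the README's point
# about only pushing the metrics a job actually touched.
jobs_done.labels(host="worker-1", event_type="login").inc()

print(generate_latest(PUSH_REGISTRY).decode())
```
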
docs/img/graceful_stop_effect01.png

9.07 KB

prometheus_push_client/formats/influx.py

Lines changed: 4 additions & 0 deletions
@@ -2,6 +2,10 @@
 
 
 class InfluxFormat(BaseFormat):
+    """
+    As described at:
+    https://docs.influxdata.com/influxdb/v1.8/write_protocols/line_protocol_tutorial/
+    """
 
     FMT_SAMPLE = "{sample_name}{tag_set} {measurement_name}={value}{timestamp}"

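For orientation, the `FMT_SAMPLE` template visible in this hunk renders one sample per line of InfluxDB line protocol. A toy rendering with made-up values (this is not the library's own formatting code, and how it actually fills the placeholders is not shown in the diff):

```python
FMT_SAMPLE = "{sample_name}{tag_set} {measurement_name}={value}{timestamp}"

# All values below are invented purely to show the shape of one line-protocol datapoint.
line = FMT_SAMPLE.format(
    sample_name="jobs_done",
    tag_set=",host=worker-1",    # influx-style tags, comma-prefixed when present
    measurement_name="total",
    value=3,
    timestamp=" 1634567890123",  # optional; space-prefixed when present
)
print(line)  # jobs_done,host=worker-1 total=3 1634567890123
```
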
prometheus_push_client/formats/statsd.py

Lines changed: 2 additions & 27 deletions
@@ -1,5 +1,3 @@
-from collections import defaultdict
-
 from prometheus_push_client.formats.base import BaseFormat
 
 
@@ -11,42 +9,19 @@ class StatsdFormat(BaseFormat):
 
     No "realtime" types supported yet.
     """
-    # TODO: support statsd native "sets", "timers" and "histograms" in FG mode
+    # TODO: support statsd native "sets", "timers" and "histograms" in FG mode?
 
     FMT_DATAPOINT = "{measurement}{tag_set}:{value}|{dtype}" # influx-style tags
 
-    DTYPES = {
-        "gauge": defaultdict(lambda: "g"),
-        "counter": {
-            "total": "c",
-            "created": "g",
-        },
-        "summary": {
-            "sum": "c",
-            "count": "c",
-            "created": "g",
-        },
-        "histogram": {
-            "bucket": "c",
-            "sum": "c",
-            "count": "c",
-            "created": "g",
-        }
-
-        # TODO: info, enum
-    }
-
 
     def format_sample(self, sample, metric):
         # TODO: gauges reset?
-
         measurement_name = sample.name
 
         chunks = measurement_name.rsplit("_", 1)
         suffix = chunks[-1] if len(chunks) > 1 else None
 
-        # dtype = self.DTYPES[metric.type][suffix]
-        dtype = "g" # TODO: omfg! everything behaves like gauge
+        dtype = "g" # everything behaves like gauge on statsd-side
         value = sample.value
 
         tag_set = ""

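Likewise, a toy rendering of `FMT_DATAPOINT` from this file, showing what a sample looks like on the wire now that everything is emitted as a StatsD gauge (values invented; not the library's own rendering code):

```python
FMT_DATAPOINT = "{measurement}{tag_set}:{value}|{dtype}"  # influx-style tags

line = FMT_DATAPOINT.format(
    measurement="jobs_done_total",  # invented sample name
    tag_set=",host=worker-1",       # influx-style tags appended to the measurement
    value=3,
    dtype="g",                      # after this change every sample type maps to a gauge
)
print(line)  # jobs_done_total,host=worker-1:3|g
```
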
prometheus_push_client/transports/udp.py

Lines changed: 1 addition & 0 deletions
@@ -17,6 +17,7 @@ def stop(self):
         self.transport.close()
 
     def pack_datagrams(self, iterable):
+        # TODO: if first line > mtu?
         datagram = []
         datagram_size = 0
         for line in iterable:

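The TODO added here asks what should happen when a single line already exceeds the MTU. A self-contained sketch of the greedy packing idea the visible lines suggest (an illustration only, not the library's actual `pack_datagrams` implementation; the `mtu` default and the oversized-line behaviour are assumptions):

```python
def pack_datagrams(lines, mtu=508):
    """Greedily batch newline-joined lines into datagrams of at most ``mtu`` characters."""
    datagram = []
    datagram_size = 0
    for line in lines:
        line_size = len(line) + 1  # +1 for the joining newline
        if datagram and datagram_size + line_size > mtu:
            yield "\n".join(datagram)
            datagram = []
            datagram_size = 0
        # A line longer than mtu on its own still goes out as one oversized datagram here,
        # which is exactly the open question in the TODO above.
        datagram.append(line)
        datagram_size += line_size
    if datagram:
        yield "\n".join(datagram)


# Usage sketch: each printed chunk would become one UDP datagram.
for dgram in pack_datagrams(["metric_a:1|g", "metric_b:2|g", "metric_c:3|g"], mtu=16):
    print(dgram)
```
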
setup.py

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@
 readme_lines = []
 with open('README.md') as fd:
     readme_lines = filter(None, fd.read().splitlines())
-readme_lines = list(readme_lines)[:5]
+readme_lines = list(readme_lines)[:10]
 readme_lines.append('Read more at [github page](%s).' % github_url)
 readme = '\n\n'.join(readme_lines)

test/test_online/test_victoria.py

Lines changed: 4 additions & 4 deletions
@@ -74,7 +74,7 @@ def _test():
 with pytest.raises(ZeroDivisionError):
     _test()
 
-time.sleep(2.0) # let them sync
+time.sleep(3.0) # let them sync
 
 found_after = export(cfg)
 count_after = count_samples(found_after, counter1._name)
@@ -103,7 +103,7 @@ async def _test():
 with pytest.raises(ZeroDivisionError):
     await _test()
 
-await asyncio.sleep(2.0) # let them sync
+await asyncio.sleep(3.0) # let them sync
 
 found_after = export(cfg)
 count_after = count_samples(found_after, counter1._name)
@@ -131,7 +131,7 @@ def _test():
 with pytest.raises(ZeroDivisionError):
     _test()
 
-time.sleep(2.0) # let them sync
+time.sleep(3.0) # let them sync
 
 found_after = export(cfg)
 count_after = count_samples(found_after, counter1._name)
@@ -160,7 +160,7 @@ async def _test():
 with pytest.raises(ZeroDivisionError):
     await _test()
 
-await asyncio.sleep(2.0) # let them sync
+await asyncio.sleep(3.0) # let them sync
 
 found_after = export(cfg)
 count_after = count_samples(found_after, counter1._name)
