Conversation

yudytskiy

The Prometheus middleware is good, but it performs file operations on every middleware hook, which might become a bottleneck.

The suggested middleware collects similar metrics, but keeps the statistics in memory, so they can be queried by a simple task when needed.
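A minimal sketch of such in-memory metrics, assuming taskiq's `TaskiqMiddleware` hooks `pre_execute`/`post_execute`/`on_error` and the `task_name`/`execution_time` fields on its message and result types; the class and counter names here are illustrative, not the ones in this PR:

```python
from collections import defaultdict
from typing import Any

from taskiq import TaskiqMessage, TaskiqMiddleware, TaskiqResult


class InMemoryStatsMiddleware(TaskiqMiddleware):
    """Counts task executions in plain dicts, so hooks never touch the filesystem."""

    def __init__(self) -> None:
        super().__init__()
        self.started: dict[str, int] = defaultdict(int)
        self.finished: dict[str, int] = defaultdict(int)
        self.errors: dict[str, int] = defaultdict(int)
        self.runtime: dict[str, float] = defaultdict(float)

    def pre_execute(self, message: TaskiqMessage) -> TaskiqMessage:
        self.started[message.task_name] += 1
        return message

    def post_execute(self, message: TaskiqMessage, result: TaskiqResult[Any]) -> None:
        self.finished[message.task_name] += 1
        self.runtime[message.task_name] += result.execution_time

    def on_error(
        self,
        message: TaskiqMessage,
        result: TaskiqResult[Any],
        exception: BaseException,
    ) -> None:
        self.errors[message.task_name] += 1

    def snapshot(self) -> dict[str, dict[str, float]]:
        """Point-in-time view of the counters, suitable for a stats task to return."""
        return {
            name: {
                "started": self.started[name],
                "finished": self.finished[name],
                "errors": self.errors[name],
                "total_runtime": self.runtime[name],
            }
            for name in self.started
        }
```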

Since each middleware instance lives inside its own worker process, requesting stats means collecting them from every worker's middleware instance.

I suggest starting an additional worker task inside the middleware, subscribed to a dedicated pub-sub stats broker, so that kiqing the stats task triggers execution in every main worker process and all the results are gathered together.
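A toy sketch of that request/reply pattern, using only the standard library; `LocalPubSub`, `serve_worker_stats`, and `gather_stats` are illustrative names, and a real deployment would back the channels with a networked pub-sub (for example Redis) rather than in-process queues:

```python
import asyncio
import json
from collections import defaultdict
from typing import Callable


class LocalPubSub:
    """Toy in-process pub-sub used only to illustrate the message flow."""

    def __init__(self) -> None:
        self._channels: dict[str, list[asyncio.Queue[str]]] = defaultdict(list)

    async def subscribe(self, channel: str) -> "asyncio.Queue[str]":
        queue: asyncio.Queue[str] = asyncio.Queue()
        self._channels[channel].append(queue)
        return queue

    async def publish(self, channel: str, payload: str) -> None:
        for queue in self._channels[channel]:
            queue.put_nowait(payload)


async def serve_worker_stats(
    pubsub: LocalPubSub,
    snapshot: Callable[[], dict],
    worker_id: str,
) -> None:
    """Runs inside every worker process: answer each stats request with a local snapshot."""
    requests = await pubsub.subscribe("stats:request")
    while True:
        await requests.get()  # any message means "send me your stats"
        await pubsub.publish(
            "stats:reply",
            json.dumps({"worker": worker_id, "stats": snapshot()}),
        )


async def gather_stats(pubsub: LocalPubSub, timeout: float = 0.5) -> list[dict]:
    """Body of the kiq-able stats task: broadcast one request, collect every reply."""
    replies = await pubsub.subscribe("stats:reply")
    await pubsub.publish("stats:request", "{}")
    results: list[dict] = []
    try:
        while True:
            results.append(json.loads(await asyncio.wait_for(replies.get(), timeout)))
    except asyncio.TimeoutError:
        pass  # no more workers answered within the timeout window
    return results
```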

This pull request is just a proof of concept of the idea and includes:

  • metrics classes
  • a middleware that uses the metrics
  • tests for the metrics
  • a demo stats script (see the usage sketch below)
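For the demo script, a hedged sketch of how the stats task might be kiqed and awaited, using taskiq's documented `InMemoryBroker` / `kiq` / `wait_result` pattern; the task body reuses `gather_stats` and `LocalPubSub` from the sketch above, which are illustrative helpers rather than this PR's actual API:

```python
import asyncio

from taskiq import InMemoryBroker

broker = InMemoryBroker()
pubsub = LocalPubSub()  # from the sketch above; a real setup shares a networked channel


@broker.task
async def stats() -> list[dict]:
    # Broadcast a stats request and collect the per-worker replies.
    return await gather_stats(pubsub)


async def main() -> None:
    # InMemoryBroker runs the task in-process; with a distributed broker the
    # task would execute on a worker and fan out over the pub-sub channel.
    task = await stats.kiq()
    result = await task.wait_result()
    for worker_stats in result.return_value:
        print(worker_stats)


if __name__ == "__main__":
    asyncio.run(main())
```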

I had to bump the minimum required mypy and black versions to support modern generics syntax.

So I am just sharing the idea of request-oriented statistics gathering via a pub-sub broker.
