Support Collection.count() and exponential-backoff-and-retry #83
base: master
Conversation
The v2 REST API no longer officially supports the "/count" path. Although empirical evidence would suggest it still exists for "/incidents/count", there is no documentation supporting this, and plenty of support conversations suggesting otherwise. This is especially true for sub-container elements such as Incident.log_entries. This update converts all "count" calls to use the documented "total=true" parameter on a single paginated request. The use of "limit=1" has been explicitly avoided, as this seems to cause HTTP errors with certain incidents.
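As a rough illustration of the new approach (a minimal sketch with hypothetical helper names, not pygerduty's actual code): the count is read from the `total` field of a single paginated response rather than from a `/count` request.

```python
def count_params():
    # Use the documented total=true parameter on a normal paginated
    # request. limit=1 is deliberately NOT sent, since it appears to
    # trigger HTTP errors with certain incidents.
    return {"total": "true"}

def extract_total(response_body):
    # With total=true, the paginated response body carries a "total"
    # field alongside the usual page of results.
    total = response_body.get("total")
    if total is None:
        raise ValueError("no 'total' in response; was total=true sent?")
    return total
```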
Partially ported from __init__.py, with the addition of now being exponential (instead of additive) and of adding some randomness (milliseconds) to the sleep. This logic specifically targets URLError (URL timeouts, etc.) and HTTPError (429: too many requests).
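The retry logic described above could be sketched as a stand-alone helper like this (function and parameter names are assumptions for illustration, not the exact code in the diff):

```python
import random
import time
from urllib.error import HTTPError, URLError

def execute_with_retries(do_request, max_retries=5, min_backoff=15.0):
    # Retries on URLError (URL timeouts, etc.) and on HTTPError 429
    # (too many requests); other errors propagate immediately.
    # HTTPError subclasses URLError, so it must be caught first.
    for attempt in range(max_retries + 1):
        try:
            return do_request()
        except HTTPError as err:
            if err.code != 429 or attempt == max_retries:
                raise
        except URLError:
            if attempt == max_retries:
                raise
        # Exponential (not additive) backoff, plus random milliseconds
        # of jitter so concurrent threads don't retry in lockstep.
        time.sleep(min_backoff * (2 ** attempt) + random.randint(0, 999) / 1000.0)
```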
Instead of one hard-to-read test, incident.log_entries testing is now split into two smaller, easier-to-read tests.
According to the documentation, all 5xx errors should be retried: https://v2.developer.pagerduty.com/docs/events-api-v2#api-response-codes--retry-logic
            return self.execute_request(request, retry_count + 1)
        else:
            raise
    elif err.code / 100 == 5:
Probably want integer division here, err.code // 100
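The distinction matters under Python 3, where `/` is true division (under Python 2, `/` between ints was already floor division, which is why the original check worked there):

```python
# Under Python 3, "/" on ints always returns a float, so the 5xx check
# never matches: 503 / 100 == 5.03, and 5.03 == 5 is False.
# Floor division "//" yields the integer "hundreds" class as intended.
status = 503
assert status / 100 != 5    # true division: 5.03
assert status // 100 == 5   # floor division: 5
```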
  class Requester(object):
-     def __init__(self, timeout=10, proxies=None, parse_datetime=False):
+     def __init__(self, timeout=10, proxies=None, parse_datetime=False, min_backoff=15.0, max_retries=5):
Previously the timeout was 10s with no retries; with the retries + backoff, it will be a minimum of 75s before the request gives up by default.
I'd prefer to lower the default values (maybe min_backoff=5 and retries=3), and we should provide a way for users to plumb through their own values, here: https://github.com/dropbox/pygerduty/blob/master/pygerduty/v2.py#L611
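One way the plumbing might look (a hypothetical sketch only; class wiring and attribute names are assumptions, and the real integration point is the linked line in v2.py):

```python
class Requester(object):
    # Reviewer-suggested defaults: min_backoff=5 and max_retries=3 keep
    # the worst-case wait shorter than the 15.0/5 values in the diff.
    def __init__(self, timeout=10, proxies=None, parse_datetime=False,
                 min_backoff=5.0, max_retries=3):
        self.timeout = timeout
        self.proxies = proxies
        self.parse_datetime = parse_datetime
        self.min_backoff = min_backoff
        self.max_retries = max_retries

class PagerDuty(object):
    # Forward user-supplied retry settings down to the Requester so
    # callers are not stuck with the library defaults.
    def __init__(self, api_token, **requester_kwargs):
        self.api_token = api_token
        self.requester = Requester(**requester_kwargs)
```

With this shape, a caller could write `PagerDuty("TOKEN", min_backoff=1.0, max_retries=2)` to tune the backoff per instance.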
Minor changes in order to better support multi-threading.
- `Collection.count()` method to be updated to support when the `base_container` is not `None` (ex: when it is an `Incident`). This looks like it was probably an accidental oversight, as most other `Collection` methods already take this into consideration.
- The v2 REST API does not document the `/count` endpoint anymore. Though empirical evidence suggests they have not removed support for it on the main incidents list, there are multiple documentation/community references that suggest it is no longer supported.
- Multi-threaded use (one `PagerDuty` instance per thread) rapidly causes requests to run into `HTTPError` (429) and/or `URLError` (timeout), eliciting the need for an exponential-backoff-and-retry loop in order not to prematurely break out of iterator/generator loops.