feat: add get_feature_flags for bulk flag evaluation with events #534
Conversation
Add a new `get_feature_flags(keys, distinct_id, ...)` method for evaluating a known subset of feature flags in one bulk pass while still emitting `$feature_flag_called` events per resolved flag. Locally-evaluated flags reuse the poller's cached definitions; keys that can't be resolved locally fall through to a single remote `/flags` call with `flag_keys_to_evaluate`. Event dedup uses the existing `distinct_ids_feature_flags_reported` cache so the single-flag and bulk paths share one source of truth.

Also adds a `send_feature_flag_events` option to `get_all_flags` and `get_all_flags_and_payloads` for opt-in per-flag event emission. Defaults to `False` to preserve existing behavior.

Ports PostHog/posthog-js#3447 to posthog-python.

Generated-By: PostHog Code
Task-Id: b54693f6-498d-4193-bbc2-b9a97e07e69d
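The local-first resolution and per-flag event dedup described above can be sketched in plain Python. This is a hypothetical illustration, not the SDK's actual implementation; `evaluate_flags_bulk`, `remote_resolve`, and `reported` are illustrative stand-ins for the client internals named in the PR:

```python
from typing import Callable, Optional


def evaluate_flags_bulk(
    keys: list[str],
    distinct_id: str,
    local_flags: dict[str, bool],
    remote_resolve: Callable[[list[str]], dict[str, bool]],
    reported: set[tuple[str, str]],
) -> tuple[dict[str, Optional[bool]], list[dict]]:
    """Resolve keys locally first, make at most one remote call for the rest,
    and emit one $feature_flag_called event per (distinct_id, key) not yet seen."""
    results: dict[str, Optional[bool]] = {}
    missing: list[str] = []
    for key in keys:
        if key in local_flags:
            # Served from the poller's cached definitions.
            results[key] = local_flags[key]
        else:
            missing.append(key)
    if missing:
        # Single bulk call, akin to /flags with flag_keys_to_evaluate.
        remote = remote_resolve(missing)
        for key in missing:
            results[key] = remote.get(key)  # None when the key can't be resolved
    events = []
    for key, value in results.items():
        # Dedup against the shared reported cache, as with
        # distinct_ids_feature_flags_reported in the PR.
        if (distinct_id, key) not in reported:
            reported.add((distinct_id, key))
            events.append({
                "event": "$feature_flag_called",
                "distinct_id": distinct_id,
                "properties": {"$feature_flag": key, "$feature_flag_response": value},
            })
    return results, events
```

Calling this twice for the same `distinct_id` would emit events only on the first pass, which is the dedup behavior the PR describes for the single-flag and bulk paths.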
posthog-python Compliance Report. Date: 2026-04-23 22:23:46 UTC

✅ All Tests Passed! 30/30 tests passed

- Capture Tests: ✅ 29/29 tests passed
- Feature_Flags Tests: ✅ 1/1 tests passed
```python
disable_geoip=None,  # type: Optional[bool]
device_id=None,  # type: Optional[str]
flag_keys_to_evaluate=None,  # type: Optional[list[str]]
send_feature_flag_events=False,  # type: bool
```
IIRC there was an RFC to kill this flag due to its side effect:
https://github.com/PostHog/requests-for-comments-internal/pull/1020, I think.
So before adding this flag to more SDKs, should we double-check that we actually want to do this?
My interpretation of that RFC is to remove it from methods like `capture`, since we really don't need to send flag data for non-flag calls. Adding it to `get_all_flags` here was meant to let people keep using `get_all_flags` while still sending the flag values as events. What I noticed this week when talking to a customer was that they basically call `get_all_flags` once as part of the initial page load, save those values into their own app memory, and then use them throughout. This option would let them avoid refactoring their code to use my proposed new `get_feature_flags()` method; they could instead just add a field to their existing code.

But I read that RFC and it's a much saner approach IMO, so maybe rather than adding this new method that follows the existing patterns, I should do the heavier lift and just implement the RFC. The reason I'm hesitant is that it makes this a heavier lift for the customer: rather than just bumping the SDK and adding this new method, they have to bump to a bigger version and deprecate their other methods everywhere. Maybe in LLM world that's just as easy, but I wanted to be cognizant of that.
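The load-once caching pattern described above can be sketched with a small stand-in (this is a hypothetical illustration, not the real SDK client; `FlagStore` and `fetch_all` are invented names):

```python
from typing import Callable


class FlagStore:
    """Load all flags once (e.g. at initial page load) and serve
    subsequent reads from app memory, with no further network calls."""

    def __init__(self, fetch_all: Callable[[], dict[str, bool]]):
        # One bulk fetch, akin to a single get_all_flags call.
        self._flags = fetch_all()

    def is_enabled(self, key: str, default: bool = False) -> bool:
        # Pure in-memory lookup thereafter.
        return self._flags.get(key, default)
```

With `send_feature_flag_events=True` on that single bulk call, the customer would still get per-flag `$feature_flag_called` usage data without restructuring reads like these.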
Closing in favor of a more principled approach; see https://posthog.slack.com/archives/C07Q2U4BH4L/p1777054243923129
Summary
Add a new `get_feature_flags(keys, distinct_id, ...)` method for evaluating a known subset of feature flags in one bulk pass while still emitting `$feature_flag_called` events per resolved flag. Also adds a `send_feature_flag_events` option to the existing `get_all_flags`/`get_all_flags_and_payloads` methods for opt-in per-flag event emission.

The motivation (per the original PR): customers loading a page server-side often need a specific ~10 flags. `get_feature_flag` per key is N network calls; `get_all_flags` is one call but emits no `$feature_flag_called` events and evaluates flags the caller doesn't care about. The new method is the sweet spot: at most one network round trip, per-flag usage events.

Ports PostHog/posthog-js#3447 from `posthog-node` to `posthog-python`. Supersedes #532 (closed because its branch was based on a stale commit that predated several unsigned release commits on main, which blocked a clean rebase-and-push under the repo's signature policy).

Changes
- New `get_feature_flags` method (`posthog/client.py`): evaluates a subset of flags in bulk. Locally-evaluated flags reuse the poller's cached definitions; keys that can't be resolved locally fall through to a single remote `/flags` call with `flag_keys_to_evaluate`. Threads `device_id` and participates in the same error handling (quota, timeout, connection, api-error, unknown) + stale-cache fallback as `_get_feature_flag_result`.
- Event dedup uses the existing `distinct_ids_feature_flags_reported` cache so the single-flag and bulk paths share one source of truth.
- New `send_feature_flag_events` option on `get_all_flags`/`get_all_flags_and_payloads`. Defaults to `False` to preserve existing behavior.
- Module-level API (`posthog/__init__.py`): adds `posthog.get_feature_flags(...)` and threads `send_feature_flag_events` through the module-level `get_all_flags`/`get_all_flags_and_payloads` wrappers.
- Tests (`posthog/test/test_feature_flags.py`): new `TestGetFeatureFlagsBulk` class with 9 tests covering remote-only evaluation, local-only evaluation, hybrid scenarios, event deduplication, `send_feature_flag_events=False` suppression, missing-key handling (with `$feature_flag_error=flag_missing`), and the new `get_all_flags` option.

Test plan
- `uv run pytest posthog/test/test_feature_flags.py::TestGetFeatureFlagsBulk`: 9/9 pass
- `uv run pytest posthog/test/ --ignore=posthog/test/ai`: 459/459 pass (no regressions; `send_feature_flag_events=False` default preserves historical behavior)
- `uv run mypy posthog/client.py posthog/__init__.py`: no new errors vs. baseline
- `uvx ruff format`: clean

Created with PostHog Code
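The stale-cache fallback mentioned in the Changes (shared with `_get_feature_flag_result`) can be sketched as follows. This is a hypothetical illustration under assumed names (`resolve_with_fallback`, `remote_resolve`, `stale_cache`), not the client's actual code:

```python
from typing import Callable, Optional


def resolve_with_fallback(
    keys: list[str],
    remote_resolve: Callable[[list[str]], dict[str, bool]],
    stale_cache: dict[str, bool],
) -> dict[str, Optional[bool]]:
    """Try the remote bulk call; if it fails (timeout, connection, API error),
    fall back to the last-known cached definitions rather than raising."""
    try:
        return dict(remote_resolve(keys))
    except Exception:
        # Stale values are better than no values for flag evaluation;
        # unknown keys resolve to None.
        return {key: stale_cache.get(key) for key in keys}
```

The real client distinguishes quota, timeout, connection, api-error, and unknown cases; a single `except Exception` is a deliberate simplification here.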