⚡️ Speed up function encode_query by 12%
#111
Open
📄 12% (0.12x) speedup for `encode_query` in `skyvern/client/core/query_encoder.py`

⏱️ Runtime: 9.27 milliseconds → 8.24 milliseconds (best of 206 runs)

📝 Explanation and details
The optimization achieves a 12% speedup by restructuring the type-checking logic to minimize redundant `isinstance()` calls and reduce branch-prediction overhead.

Key optimizations applied:

- **Reordered type checking:** the `dict` check now comes first, since dicts are more common in query structures; this avoids the expensive protocol lookup for pydantic models in the common case.
- **Eliminated redundant compound conditions:** the original `isinstance(query_value, pydantic.BaseModel) or isinstance(query_value, dict)` is split into separate `elif` branches, reducing the number of type checks when the first condition fails.
- **Streamlined list processing:** the list-handling branch drops its nested conditional logic and handles the `dict` and `pydantic.BaseModel` cases separately, eliminating duplicate `isinstance()` calls inside the loop.
- **Direct method calls:** for pydantic models inside lists, `.dict(by_alias=True)` is called directly instead of being stored in an intermediate variable, reducing memory allocations.

Why this leads to a speedup: `isinstance()` calls are relatively expensive in Python, especially for protocol-based types like pydantic models.
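The cost asymmetry behind the reordering can be illustrated with a small micro-benchmark. This is an illustrative sketch only: `Model` is a hypothetical stand-in for `pydantic.BaseModel` (whose check is costlier still), and absolute timings will vary by machine. The point is that checking the rare type first forces every dict through a failed `isinstance()` before reaching the branch that matches.

```python
import timeit


class Model:
    """Hypothetical stand-in for pydantic.BaseModel; real pydantic
    isinstance checks are costlier, so the effect is understated here."""


# Dict-heavy workload, as is typical for query structures
values = [{"k": i} for i in range(1_000)]


def dict_first() -> int:
    # Common case checked first: one isinstance() per element.
    hits = 0
    for v in values:
        if isinstance(v, dict):
            hits += 1
        elif isinstance(v, Model):
            hits += 1
    return hits


def model_first() -> int:
    # Rare case checked first: two isinstance() calls per element.
    hits = 0
    for v in values:
        if isinstance(v, Model):
            hits += 1
        elif isinstance(v, dict):
            hits += 1
    return hits


print("dict first :", timeit.timeit(dict_first, number=1_000))
print("model first:", timeit.timeit(model_first, number=1_000))
```

Both functions count the same elements; only the branch order differs, which is exactly the kind of reordering the optimization applies.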
Impact on workloads: the function is called from HTTP client methods (`request()` and `stream()`) to encode query parameters for API calls. Since these are hot paths that may process many requests with complex nested data structures, the 12% improvement becomes significant at scale. The optimization maintains identical behavior while improving performance for dictionary-heavy query encoding scenarios in particular.
✅ Correctness verification report:
🌀 Generated Regression Tests and Runtime
To edit these changes, run `git checkout codeflash/optimize-encode_query-mira9h0d` and push.