⚡ Bolt: Implement bounded LRU cache for LLM recommendations (#15)
Conversation
- Added `functools.lru_cache` to `backend/jules_engine.py` to store styling advice.
- Implemented input normalization (rounding height/weight) to improve the cache hit rate.
- Restricted the cache to 128 entries to prevent unbounded memory growth.
- Verified a ~100% latency reduction (1.5 s -> 0.000 s) for subsequent identical requests.
- Cleaned up `__pycache__` artifacts.

Co-authored-by: LVT-ENG <214667862+LVT-ENG@users.noreply.github.com>
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me.

New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a critical performance optimization by implementing a bounded LRU cache for LLM-generated styling recommendations. The primary goal is to drastically reduce response times and API costs associated with repeated generative AI calls for identical user and garment combinations. By also normalizing biometric inputs, the system maximizes cache efficiency, ensuring a smoother and faster user experience.
Code Review
This pull request introduces an LRU cache to optimize LLM-generated recommendations, which is a significant performance improvement. The implementation uses functools.lru_cache and includes normalization of input metrics to increase cache hits. The accompanying test has been updated to verify the caching behavior. My review includes two points of feedback on backend/jules_engine.py: one regarding a logical issue where user metrics are used for caching but not for generating the recommendation, and another to improve robustness against potentially incomplete data by using safe dictionary access.
```python
def _get_cached_advice(event_type, height, weight, garment_id, drape, elasticity, garment_name):
    """
    Generates an emotional styling tip without mentioning body numbers or sizes.
    Internal helper to use functools.lru_cache for memoization.
    Arguments must be hashable.
    """
```
The `height` and `weight` parameters are used as part of the cache key for `_get_cached_advice`, but they are not used in the prompt sent to the generative model. As a result, the LLM will generate the same advice for a given garment regardless of the user's height and weight. This makes caching on these parameters ineffective and may not produce the desired personalized recommendations based on a user's silhouette.
To fix this, you should either:

- If the advice should be personalized, include some representation of the user's silhouette (derived from height and weight) in the prompt. This should be done while respecting the rule "NEVER mention weight, height, or numeric measurements". For example, you could derive a qualitative description like 'tall and lean' or 'petite'.
- If the advice is not intended to be personalized by height and weight, remove `height` and `weight` from the arguments of this function and from the call in `get_jules_advice`. This will improve the cache hit rate.
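The first option above could look something like the sketch below. The function name, the height cut-offs, and the BMI thresholds are all illustrative assumptions, not code from the repository; the point is only that the prompt receives a qualitative label, never a number.

```python
def silhouette_label(height_cm: float, weight_kg: float) -> str:
    """Map raw biometrics to a qualitative descriptor so the prompt can be
    personalized without ever exposing numeric measurements.

    Thresholds here are arbitrary placeholders for illustration.
    """
    bmi = weight_kg / (height_cm / 100) ** 2

    if height_cm >= 180:
        build = "tall"
    elif height_cm < 160:
        build = "petite"
    else:
        build = "average-height"

    if bmi < 18.5:
        return f"{build} and lean"
    if bmi >= 25:
        return f"{build} and curvy"
    return build
```

The label (not the raw numbers) would then be interpolated into the prompt, keeping the "NEVER mention weight, height, or numeric measurements" rule intact while making the `height`/`weight` cache-key components actually meaningful.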
```python
garment_data['drape'],
garment_data['elasticity'],
garment_data['name']
```
The direct dictionary access `garment_data['key']` can raise a `KeyError` if the key is missing in `garment_data`. This is inconsistent with how `garment_id` is retrieved using `.get()` on line 68, which is safer. To make the code more robust against malformed garment data, consider using the `.get()` method for all dictionary access. You might also want to provide default values (e.g., `garment_data.get('drape', 'N/A')`) to avoid passing `None` into the prompt string.
Suggested change:

```diff
-garment_data['drape'],
-garment_data['elasticity'],
-garment_data['name']
+garment_data.get('drape'),
+garment_data.get('elasticity'),
+garment_data.get('name')
```
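Adding the explicit defaults the reviewer mentions would behave as in this small sketch (the sample payload is hypothetical):

```python
# Hypothetical garment payload with the 'elasticity' field missing.
garment_data = {"name": "silk wrap dress", "drape": "fluid"}

# .get() with an explicit default never raises KeyError, and the 'N/A'
# fallback keeps the literal string 'None' out of the prompt.
drape = garment_data.get("drape", "N/A")
elasticity = garment_data.get("elasticity", "N/A")
name = garment_data.get("name", "N/A")

prompt_fragment = f"drape={drape}, elasticity={elasticity}, garment={name}"
```

Without the default, `garment_data.get("elasticity")` would return `None` and the prompt would read `elasticity=None`, which is exactly the failure mode the review comment warns about.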
💡 What: Implemented a bounded LRU cache for LLM-generated styling recommendations in the backend.
🎯 Why: Generative AI calls are the most significant latency bottleneck in the application. Identical user/garment combinations were previously triggering redundant API calls, increasing costs and waiting time for users.
📊 Impact: Reduces response time from ~1.5 seconds (LLM latency) to <1 ms (memory lookup) for repeat requests. Biometric inputs (height/weight) are rounded to the nearest integer to absorb slight sensor variations, further improving the cache hit rate.
🔬 Measurement: Verified using `backend/test_jules.py` with simulated 1.5 s latency. The first request took 1.5017 s; subsequent requests (including those with slightly varied metrics) took 0.0000-0.0001 s.

PR created automatically by Jules for task 6234282991731686872, started by @LVT-ENG
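A measurement like the one above can be reproduced with a minimal harness along these lines. This is a standalone sketch with a simulated delay, not the actual `backend/test_jules.py`; the function name and arguments are placeholders.

```python
import time
from functools import lru_cache


@lru_cache(maxsize=128)
def advice(event_type: str, height: int, weight: int) -> str:
    time.sleep(1.5)  # simulate LLM latency, mirroring the test harness
    return f"tip for {event_type}"


t0 = time.perf_counter()
advice("gala", 172, 60)            # cold call: pays the simulated 1.5 s
cold = time.perf_counter() - t0

t0 = time.perf_counter()
advice("gala", 172, 60)            # warm call: served from the in-memory cache
warm = time.perf_counter() - t0

print(f"cold={cold:.4f}s warm={warm:.6f}s")
```

The cold call takes roughly the full simulated latency, while the warm call is a dictionary lookup and completes in well under a millisecond, matching the ~1.5017 s vs 0.0000-0.0001 s figures reported above.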