Conversation
- Implement `lru_cache` for Jules Engine styling advice.
- Add missing `drape` and `elasticity` keys to `SHOPIFY_INVENTORY`.
- Optimize performance for repeated identical AI requests from ~500ms+ to <1ms.
- Add benchmark script and document learning in `.jules/bolt.md`.
- Fix existing tests to match model and auth implementation.

Co-authored-by: LVT-ENG <214667862+LVT-ENG@users.noreply.github.com>
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces a caching mechanism to the Jules Engine to improve the performance of AI-powered styling recommendations. By caching the results of LLM API calls, it significantly reduces response times for repeated requests. A benchmark script is included to demonstrate the performance gains.

Highlights
Code Review
This pull request effectively introduces a caching layer using functools.lru_cache to optimize the performance of the AI-powered Jules Engine, which is a great improvement. The implementation correctly uses a helper function with hashable types to enable caching, and the change is well-supported by a new benchmark script and updated tests.
My review includes two main points:
- A high-severity concern in `jules_engine.py` regarding the use of default values for garment attributes, which could lead to caching incorrect AI advice. I've suggested raising an error on missing data to ensure integrity.
- A medium-severity suggestion for the new benchmark script to remove redundant code that has become obsolete due to updates in the data model, which will improve maintainability.
```python
return _get_jules_advice_cached(
    user_data.event_type,
    garment_data.get('name', 'Luxury Item'),
    garment_data.get('drape', 'Fluid'),
    garment_data.get('elasticity', 'Comfortable')
)
```
The use of default values in garment_data.get(...) can lead to incorrect AI recommendations being cached. If a garment is missing name, drape, or elasticity, the function will proceed with generic defaults ('Luxury Item', 'Fluid', 'Comfortable'). This will cause the LLM to generate advice based on potentially incorrect information, and this incorrect advice will then be stored in the cache. Subsequent requests for other garments that are also missing these keys will receive the same incorrect, cached response.
To ensure data integrity and the correctness of the AI advice, it's better to handle missing keys more explicitly by raising an error. The existing try...except block in main.py will gracefully handle this and provide a fallback, preventing bad data from being cached.
Suggested change:

```python
# Use the cached helper to avoid redundant LLM calls
garment_name = garment_data.get('name')
garment_drape = garment_data.get('drape')
garment_elasticity = garment_data.get('elasticity')
if not all((garment_name, garment_drape, garment_elasticity)):
    # Avoid caching and generating advice with incomplete data.
    # The calling function in main.py already handles exceptions gracefully.
    raise ValueError(f"Garment {garment_data.get('id', 'Unknown')} is missing required attributes for AI advice.")
return _get_jules_advice_cached(
    user_data.event_type,
    garment_name,
    garment_drape,
    garment_elasticity
)
```
```python
# Ensure garment has required keys (temporarily for this test if not fixed yet)
garment = SHOPIFY_INVENTORY["BALMAIN_SS26_SLIM"].copy()
if 'drape' not in garment:
    garment['drape'] = "Architectural"
if 'elasticity' not in garment:
    garment['elasticity'] = "High-Recovery"
```
This block of code, including the comment and the conditional checks for drape and elasticity, is now redundant. The backend/models.py file has been updated to include these keys in the SHOPIFY_INVENTORY data. Removing this temporary fix will make the benchmark script cleaner and ensure it tests against the actual data structure.
Suggested change:

```python
garment = SHOPIFY_INVENTORY["BALMAIN_SS26_SLIM"].copy()
```
Pull request overview
This PR introduces an LRU-caching layer around the Jules (LLM) styling recommendation generation to reduce repeated identical LLM calls and improve response latency for recurring garment/event combinations.
Changes:
- Added an `lru_cache`-backed internal helper in `jules_engine` and routed `get_jules_advice` through it using hashable cache keys.
- Enriched Shopify inventory entries with `drape` and `elasticity` fields used in the prompt/caching key.
- Updated the API fallback test to use authenticated payloads and verify fallback styling advice; added a benchmark script to compare cold vs warm calls.
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| backend/jules_engine.py | Adds cached helper for AI advice generation and updates wrapper to build a hashable cache key. |
| backend/models.py | Extends inventory items with fabric properties used in AI prompt/caching. |
| backend/tests/test_main.py | Updates recommend endpoint test to include auth fields and assert fallback styling advice behavior. |
| backend/benchmark_jules.py | Adds a local benchmark that mocks LLM latency to demonstrate cold vs warm cache timing. |
| .jules/bolt.md | Documents the caching approach and key design constraint (hashable inputs). |
```python
    # It's a dict
    garment_data = garment
elif hasattr(garment, 'dict'):
    # It's a Pydantic model
    garment_data = garment.dict()
else:
    # Fallback
    garment_data = garment
```
The fallback branch sets garment_data = garment, but the code below unconditionally calls garment_data.get(...). If garment is neither a dict-like object nor a Pydantic model, this will raise AttributeError. Consider either (a) removing the fallback and raising TypeError for unsupported types, or (b) converting unknown objects to a dict before calling .get() (e.g., via vars()), so the fallback is actually safe.
Suggested change:

```python
    # It's a dict or dict-like object
    garment_data = garment
elif hasattr(garment, 'dict'):
    # It's a Pydantic model
    garment_data = garment.dict()
elif hasattr(garment, '__dict__'):
    # Generic object: use its __dict__ representation
    garment_data = vars(garment)
else:
    # Unsupported type: fail fast with a clear error instead of AttributeError on .get()
    raise TypeError(
        "garment must be a mapping with 'get', a Pydantic-like model with 'dict', "
        "or an object with a '__dict__' attribute"
    )
```
```python
# Mock the generative model to avoid real API calls and simulate latency
original_model = jules_engine.model
mock_model = MagicMock()

def mock_generate_content(prompt):
    time.sleep(0.5)  # Simulate 500ms LLM latency
    mock_response = MagicMock()
    mock_response.text = "Mocked Luxury Advice: Elegant and Fluid."
    return mock_response

mock_model.generate_content.side_effect = mock_generate_content
jules_engine.model = mock_model
```
The benchmark’s “cold cache” timing can be incorrect if the process has already populated the LRU cache (e.g., if the script is run in an interactive session or reused in the same process). Clear the cache before the first timing (and optionally after restoring the model) to ensure the first call is truly cold and the second call is truly warm.
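A minimal sketch of the fix, assuming the cached helper is the `lru_cache`-decorated function from this PR (the short `time.sleep` here is a stand-in for the mocked 500ms LLM latency):

```python
import time
from functools import lru_cache

@lru_cache(maxsize=128)
def _get_jules_advice_cached(event_type: str, name: str, drape: str, elasticity: str) -> str:
    time.sleep(0.05)  # stand-in for the mocked LLM latency
    return "Mocked Luxury Advice: Elegant and Fluid."

# Guarantee the first timed call is truly cold, even if this process
# (e.g. an interactive session) already populated the cache.
_get_jules_advice_cached.cache_clear()

start = time.perf_counter()
_get_jules_advice_cached("gala", "Balmain SS26 Slim", "Architectural", "High-Recovery")
cold = time.perf_counter() - start

start = time.perf_counter()
_get_jules_advice_cached("gala", "Balmain SS26 Slim", "Architectural", "High-Recovery")
warm = time.perf_counter() - start

print(f"cold={cold * 1000:.1f}ms warm={warm * 1000:.4f}ms")
```

`cache_clear()` is the documented way to reset an `lru_cache`-wrapped function, so the benchmark measures a miss followed by a hit regardless of prior process state.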
```python
# 2. Prepare the request payload with valid auth
user_id = "LAFAYETTE_USER"
ts = str(int(time.time()))
sig = hmac.new(SECRET_KEY.encode(), f"{user_id}:{ts}".encode(), hashlib.sha256).hexdigest()
token = f"{ts}.{sig}"
```
This test re-implements the auth token logic using the production SECRET_KEY and the current clock (time.time()), which makes the test tightly coupled to implementation details and potentially flaky. Since the goal here is to verify the Jules fallback behavior, consider monkeypatching backend.main.verify_auth to return True (or injecting a fixed timestamp) so the test is deterministic and not dependent on the secret/token algorithm.
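The monkeypatching approach suggested above could look roughly like this. The `main` class and `recommend` function below are simplified stand-ins for `backend.main`, not the project's actual code; a real test would target `backend.main.verify_auth` with `patch` instead.

```python
from unittest.mock import patch

# Stand-in for backend.main; the real test would patch backend.main.verify_auth.
class main:
    @staticmethod
    def verify_auth(token: str) -> bool:
        raise RuntimeError("real HMAC verification should not run in this test")

def recommend(token: str) -> str:
    # Simplified endpoint logic: auth check, then the fallback advice under test.
    if not main.verify_auth(token):
        return "unauthorized"
    return "fallback styling advice"

# Deterministic: no clock, no SECRET_KEY, no token algorithm re-implementation.
with patch.object(main, "verify_auth", return_value=True):
    result = recommend("any-token")

print(result)  # fallback styling advice
```

Because the patch replaces the auth check entirely, the test exercises only the fallback behavior and cannot become flaky if the token format or secret changes.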
```python
@lru_cache(maxsize=128)
def _get_jules_advice_cached(event_type: str, garment_name: str, drape: str, elasticity: str):
    """
```
Caching is a core behavior change here, but there’s no test asserting that repeated calls with identical inputs avoid multiple model.generate_content(...) invocations. Consider adding a unit test that monkeypatches jules_engine.model.generate_content, calls get_jules_advice twice with the same event/garment properties, and asserts the model was called only once (and that the second response is served from cache).
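Such a test could be sketched as follows, with a `MagicMock` standing in for `jules_engine.model` (the real test would monkeypatch the module attribute rather than define its own `model`):

```python
from functools import lru_cache
from unittest.mock import MagicMock

# Hypothetical stand-in for jules_engine.model.
model = MagicMock()
model.generate_content.return_value.text = "Cached advice"

@lru_cache(maxsize=128)
def _get_jules_advice_cached(event_type: str, name: str, drape: str, elasticity: str) -> str:
    return model.generate_content(f"{event_type}:{name}:{drape}:{elasticity}").text

def get_jules_advice(event_type: str, garment: dict) -> str:
    return _get_jules_advice_cached(
        event_type, garment["name"], garment["drape"], garment["elasticity"]
    )

garment = {"name": "Balmain SS26 Slim", "drape": "Architectural", "elasticity": "High-Recovery"}
first = get_jules_advice("gala", garment)
second = get_jules_advice("gala", garment)

assert first == second == "Cached advice"
assert model.generate_content.call_count == 1  # second call never reached the model
print("cache behavior verified")
```

Asserting on `call_count` pins down the behavior the PR actually changed: identical inputs must not trigger a second model invocation.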
| "variant_id": "gid://shopify/ProductVariant/445566", | ||
| "drape": "Architectural", | ||
| "elasticity": "Medium-High Recovery" | ||
| }, |
SHOPIFY_INVENTORY entries now include drape and elasticity, but the Garment Pydantic model doesn’t declare these fields. To keep the schema aligned (and avoid silently dropping these values when/if Garment is used), consider adding them as optional fields on Garment (or introducing a separate model for the inventory shape).
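One way to keep the schema aligned, sketched here with assumed field names (the actual `Garment` model in `backend/models.py` may declare other fields as well):

```python
from typing import Optional
from pydantic import BaseModel

class Garment(BaseModel):
    name: str
    variant_id: str
    # Optional so existing payloads without fabric data still validate.
    drape: Optional[str] = None
    elasticity: Optional[str] = None

g = Garment(
    name="Balmain SS26 Slim",
    variant_id="gid://shopify/ProductVariant/445566",
    drape="Architectural",
    elasticity="Medium-High Recovery",
)
print(g.drape)  # Architectural
```

Declaring the fields as `Optional` avoids breaking existing callers while ensuring the values survive `garment.dict()` round-trips into the caching layer.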
💡 What: Implemented a caching layer for the AI-powered styling recommendations in the Jules Engine using `functools.lru_cache`.

🎯 Why: LLM API calls are a major performance bottleneck, often taking several seconds. By caching recommendations for identical garment/event combinations, we eliminate redundant network calls and latency.

📊 Impact: Reduces response time for repeated identical requests from several seconds (or ~500ms in simulation) to approximately 0.01ms.

🔬 Measurement: Run `python3 backend/benchmark_jules.py` to verify the performance difference between cold and warm cache hits.

♿ Performance: Massive reduction in main thread blocking and API usage costs.
PR created automatically by Jules for task 13023161297500766351 started by @LVT-ENG