
⚡ Bolt: Cache AI recommendations in Jules Engine #25

Open

LVT-ENG wants to merge 1 commit into main from bolt-cache-ai-recommendations-13023161297500766351

Conversation


@LVT-ENG LVT-ENG commented Mar 23, 2026

💡 What: Implemented a caching layer for the AI-powered styling recommendations in the Jules Engine using functools.lru_cache.

🎯 Why: LLM API calls are a major performance bottleneck, often taking several seconds. By caching recommendations for identical garment/event combinations, we eliminate redundant network calls and latency.

📊 Impact: Reduces response time for repeated identical requests from several seconds (or ~500ms in simulation) to approximately 0.01ms.

🔬 Measurement: Run `python3 backend/benchmark_jules.py` to verify the performance difference between cold and warm cache hits.

⚡ Performance: Massive reduction in main-thread blocking and API usage costs.


PR created automatically by Jules for task 13023161297500766351 started by @LVT-ENG

- Implement `lru_cache` for Jules Engine styling advice.
- Add missing `drape` and `elasticity` keys to `SHOPIFY_INVENTORY`.
- Optimize performance for repeated identical AI requests from ~500ms+ to <1ms.
- Add benchmark script and document learning in `.jules/bolt.md`.
- Fix existing tests to match model and auth implementation.

Co-authored-by: LVT-ENG <214667862+LVT-ENG@users.noreply.github.com>
@google-labs-jules
Contributor

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

@vercel

vercel bot commented Mar 23, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Actions | Updated (UTC) |
| --- | --- | --- | --- |
| _deploy_build | Ready | Preview, Comment | Mar 23, 2026 5:50am |
| tryonyou-org | Ready | Preview, Comment | Mar 23, 2026 5:50am |

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a caching mechanism to the Jules Engine to improve the performance of AI-powered styling recommendations. By caching the results of LLM API calls, it significantly reduces response times for repeated requests. A benchmark script is included to demonstrate the performance gains.

Highlights

  • Performance Improvement: Implemented an LRU cache using functools.lru_cache to cache AI-powered styling recommendations in the Jules Engine.
  • Bottleneck Resolution: Addressed the performance bottleneck caused by LLM API calls by caching recommendations for identical garment/event combinations.
  • Performance Measurement: Added a benchmark script (backend/benchmark_jules.py) to verify the performance difference between cold and warm cache hits.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code-review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its suggestions are incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot left a comment

Code Review

This pull request effectively introduces a caching layer using functools.lru_cache to optimize the performance of the AI-powered Jules Engine, which is a great improvement. The implementation correctly uses a helper function with hashable types to enable caching, and the change is well-supported by a new benchmark script and updated tests.

My review includes two main points:

  • A high-severity concern in jules_engine.py regarding the use of default values for garment attributes, which could lead to caching incorrect AI advice. I've suggested raising an error on missing data to ensure integrity.
  • A medium-severity suggestion for the new benchmark script to remove redundant code that has become obsolete due to updates in the data model, which will improve maintainability.

Comment on lines +65 to +70

```python
return _get_jules_advice_cached(
    user_data.event_type,
    garment_data.get('name', 'Luxury Item'),
    garment_data.get('drape', 'Fluid'),
    garment_data.get('elasticity', 'Comfortable')
)
```

Severity: high

The use of default values in garment_data.get(...) can lead to incorrect AI recommendations being cached. If a garment is missing name, drape, or elasticity, the function will proceed with generic defaults ('Luxury Item', 'Fluid', 'Comfortable'). This will cause the LLM to generate advice based on potentially incorrect information, and this incorrect advice will then be stored in the cache. Subsequent requests for other garments that are also missing these keys will receive the same incorrect, cached response.

To ensure data integrity and the correctness of the AI advice, it's better to handle missing keys more explicitly by raising an error. The existing try...except block in main.py will gracefully handle this and provide a fallback, preventing bad data from being cached.

Suggested change

```diff
-return _get_jules_advice_cached(
-    user_data.event_type,
-    garment_data.get('name', 'Luxury Item'),
-    garment_data.get('drape', 'Fluid'),
-    garment_data.get('elasticity', 'Comfortable')
-)
+# Use the cached helper to avoid redundant LLM calls
+garment_name = garment_data.get('name')
+garment_drape = garment_data.get('drape')
+garment_elasticity = garment_data.get('elasticity')
+if not all((garment_name, garment_drape, garment_elasticity)):
+    # Avoid caching and generating advice with incomplete data.
+    # The calling function in main.py already handles exceptions gracefully.
+    raise ValueError(f"Garment {garment_data.get('id', 'Unknown')} is missing required attributes for AI advice.")
+return _get_jules_advice_cached(
+    user_data.event_type,
+    garment_name,
+    garment_drape,
+    garment_elasticity
+)
```

Comment on lines +36 to +41

```python
# Ensure garment has required keys (temporarily for this test if not fixed yet)
garment = SHOPIFY_INVENTORY["BALMAIN_SS26_SLIM"].copy()
if 'drape' not in garment:
    garment['drape'] = "Architectural"
if 'elasticity' not in garment:
    garment['elasticity'] = "High-Recovery"
```

Severity: medium

This block of code, including the comment and the conditional checks for drape and elasticity, is now redundant. The backend/models.py file has been updated to include these keys in the SHOPIFY_INVENTORY data. Removing this temporary fix will make the benchmark script cleaner and ensure it tests against the actual data structure.

Suggested change

```diff
-# Ensure garment has required keys (temporarily for this test if not fixed yet)
-garment = SHOPIFY_INVENTORY["BALMAIN_SS26_SLIM"].copy()
-if 'drape' not in garment:
-    garment['drape'] = "Architectural"
-if 'elasticity' not in garment:
-    garment['elasticity'] = "High-Recovery"
+garment = SHOPIFY_INVENTORY["BALMAIN_SS26_SLIM"].copy()
```

@LVT-ENG (Member, Author) left a comment

Yes

Copilot AI left a comment

Pull request overview

This PR introduces an LRU-caching layer around the Jules (LLM) styling recommendation generation to reduce repeated identical LLM calls and improve response latency for recurring garment/event combinations.

Changes:

  • Added an lru_cache-backed internal helper in jules_engine and routed get_jules_advice through it using hashable cache keys.
  • Enriched Shopify inventory entries with drape and elasticity fields used in the prompt/caching key.
  • Updated the API fallback test to use authenticated payloads and verify fallback styling advice; added a benchmark script to compare cold vs warm calls.

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 5 comments.

Summary per file:

| File | Description |
| --- | --- |
| backend/jules_engine.py | Adds cached helper for AI advice generation and updates wrapper to build a hashable cache key. |
| backend/models.py | Extends inventory items with fabric properties used in AI prompt/caching. |
| backend/tests/test_main.py | Updates recommend endpoint test to include auth fields and assert fallback styling advice behavior. |
| backend/benchmark_jules.py | Adds a local benchmark that mocks LLM latency to demonstrate cold vs warm cache timing. |
| .jules/bolt.md | Documents the caching approach and key design constraint (hashable inputs). |


Comment on lines +55 to +62

```python
    # It's a dict
    garment_data = garment
elif hasattr(garment, 'dict'):
    # It's a Pydantic model
    garment_data = garment.dict()
else:
    # Fallback
    garment_data = garment
```
Copilot AI Mar 25, 2026

The fallback branch sets garment_data = garment, but the code below unconditionally calls garment_data.get(...). If garment is neither a dict-like object nor a Pydantic model, this will raise AttributeError. Consider either (a) removing the fallback and raising TypeError for unsupported types, or (b) converting unknown objects to a dict before calling .get() (e.g., via vars()), so the fallback is actually safe.

Suggested change

```diff
-    # It's a dict
-    garment_data = garment
-elif hasattr(garment, 'dict'):
-    # It's a Pydantic model
-    garment_data = garment.dict()
-else:
-    # Fallback
-    garment_data = garment
+    # It's a dict or dict-like object
+    garment_data = garment
+elif hasattr(garment, 'dict'):
+    # It's a Pydantic model
+    garment_data = garment.dict()
+elif hasattr(garment, '__dict__'):
+    # Generic object: use its __dict__ representation
+    garment_data = vars(garment)
+else:
+    # Unsupported type: fail fast with a clear error instead of AttributeError on .get()
+    raise TypeError(
+        "garment must be a mapping with 'get', a Pydantic-like model with 'dict', "
+        "or an object with a '__dict__' attribute"
+    )
```

Copilot uses AI. Check for mistakes.
Comment on lines +15 to +27

```python
# Mock the generative model to avoid real API calls and simulate latency
original_model = jules_engine.model
mock_model = MagicMock()

def mock_generate_content(prompt):
    time.sleep(0.5)  # Simulate 500ms LLM latency
    mock_response = MagicMock()
    mock_response.text = "Mocked Luxury Advice: Elegant and Fluid."
    return mock_response

mock_model.generate_content.side_effect = mock_generate_content
jules_engine.model = mock_model
```
Copilot AI Mar 25, 2026

The benchmark’s “cold cache” timing can be incorrect if the process has already populated the LRU cache (e.g., if the script is run in an interactive session or reused in the same process). Clear the cache before the first timing (and optionally after restoring the model) to ensure the first call is truly cold and the second call is truly warm.
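
The fix this comment suggests can be sketched with a toy cached function (names illustrative): `cache_clear()` resets both the stored entries and the hit/miss statistics, guaranteeing the first timed call is truly cold even in a reused process.

```python
import time
from functools import lru_cache

@lru_cache(maxsize=128)
def slow_advice(event_type: str, garment: str) -> str:
    time.sleep(0.05)  # stand-in for the mocked LLM latency
    return f"Advice for {garment} at {event_type}"

slow_advice("GALA", "BALMAIN_SS26_SLIM")  # may already be cached in a reused process
slow_advice.cache_clear()                 # reset entries and statistics

t0 = time.perf_counter()
slow_advice("GALA", "BALMAIN_SS26_SLIM")  # truly cold: pays the simulated latency
cold = time.perf_counter() - t0

t0 = time.perf_counter()
slow_advice("GALA", "BALMAIN_SS26_SLIM")  # warm: served from the cache
warm = time.perf_counter() - t0

assert warm < cold
```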

Comment on lines +23 to +27

```python
# 2. Prepare the request payload with valid auth
user_id = "LAFAYETTE_USER"
ts = str(int(time.time()))
sig = hmac.new(SECRET_KEY.encode(), f"{user_id}:{ts}".encode(), hashlib.sha256).hexdigest()
token = f"{ts}.{sig}"
```
Copilot AI Mar 25, 2026

This test re-implements the auth token logic using the production SECRET_KEY and the current clock (time.time()), which makes the test tightly coupled to implementation details and potentially flaky. Since the goal here is to verify the Jules fallback behavior, consider monkeypatching backend.main.verify_auth to return True (or injecting a fixed timestamp) so the test is deterministic and not dependent on the secret/token algorithm.
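
A self-contained sketch of that monkeypatching idea, using `unittest.mock` and a stand-in module object in place of `backend.main` (the real test would patch the actual module and use the real endpoint):

```python
import types
from unittest import mock

# Stand-in for backend.main; in the real test you would import the module itself.
main = types.SimpleNamespace(verify_auth=lambda token: False)

def handle_request(token: str) -> str:
    # Simplified endpoint: auth check, then the fallback-advice path under test.
    return "fallback advice" if main.verify_auth(token) else "401"

with mock.patch.object(main, "verify_auth", lambda token: True):
    # Deterministic: no HMAC computation, no dependence on the clock.
    assert handle_request("any-token") == "fallback advice"

assert handle_request("any-token") == "401"  # original behavior restored after the patch
```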

Comment on lines +21 to 23

```python
@lru_cache(maxsize=128)
def _get_jules_advice_cached(event_type: str, garment_name: str, drape: str, elasticity: str):
    """
```
Copilot AI Mar 25, 2026

Caching is a core behavior change here, but there’s no test asserting that repeated calls with identical inputs avoid multiple model.generate_content(...) invocations. Consider adding a unit test that monkeypatches jules_engine.model.generate_content, calls get_jules_advice twice with the same event/garment properties, and asserts the model was called only once (and that the second response is served from cache).
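
Such a test might look like the following sketch, assuming the cached helper delegates to a module-level model object (names are illustrative, not the repo's exact API):

```python
from functools import lru_cache
from unittest.mock import MagicMock

model = MagicMock()
model.generate_content.return_value.text = "Mocked Luxury Advice"

@lru_cache(maxsize=128)
def _get_advice_cached(event_type: str, name: str, drape: str, elasticity: str) -> str:
    # Stand-in for the real helper: one model call per unique argument tuple.
    return model.generate_content(f"{event_type}:{name}:{drape}:{elasticity}").text

first = _get_advice_cached("GALA", "Slim Blazer", "Architectural", "Medium-High Recovery")
second = _get_advice_cached("GALA", "Slim Blazer", "Architectural", "Medium-High Recovery")

assert first == second == "Mocked Luxury Advice"
assert model.generate_content.call_count == 1  # second call never reached the model
```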

Comment on lines +28 to 31

```python
"variant_id": "gid://shopify/ProductVariant/445566",
"drape": "Architectural",
"elasticity": "Medium-High Recovery"
},
```
Copilot AI Mar 25, 2026

SHOPIFY_INVENTORY entries now include drape and elasticity, but the Garment Pydantic model doesn’t declare these fields. To keep the schema aligned (and avoid silently dropping these values when/if Garment is used), consider adding them as optional fields on Garment (or introducing a separate model for the inventory shape).
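
The shape of that suggestion, sketched here with a stdlib dataclass so it runs standalone (in the repo this would be the pydantic `Garment` model, extended with the same optional fields):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Garment:
    name: str
    variant_id: str
    drape: Optional[str] = None        # new inventory field, optional for back-compat
    elasticity: Optional[str] = None   # new inventory field, optional for back-compat

g = Garment(
    name="Slim Blazer",
    variant_id="gid://shopify/ProductVariant/445566",
    drape="Architectural",
    elasticity="Medium-High Recovery",
)
assert g.drape == "Architectural"
# Older inventory entries without the new keys still validate:
assert Garment(name="Basic Tee", variant_id="gid://x").drape is None
```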

@LVT-ENG (Member, Author) left a comment

Yes
