fix: Add missing dependencies and remove dead import in cloud-edge LLM example #334
Conversation
…M example (kubeedge#333). Signed-off-by: BhoomiAgrawal12 <bhoomiagrawal1212@gmail.com>
Welcome @BhoomiAgrawal12! It looks like this is your first PR to kubeedge/ianvs 🎉
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: BhoomiAgrawal12. The full list of commands accepted by this bot can be found here. Details: needs approval from an approver in each of these files. Approvers can indicate their approval by writing …
Summary of Changes

Hello @BhoomiAgrawal12, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request resolves several issues that were preventing the successful execution of the cloud-edge collaborative inference for LLM example. By adding the missing Python dependencies and removing an unused import, these changes ensure that the example runs without errors on fresh installations, improving the experience for users exploring this feature.

Highlights
Activity
Code Review
This pull request aims to fix issues preventing an LLM example from running by adding missing dependencies and removing a supposedly dead import. The changes correctly add retry, colorlog, and sedna as dependencies. However, I've found a critical issue where a removed import, LadeSpecDecLLM, is still used in the code, which will cause a runtime error. Additionally, for better maintainability and reproducibility, I've made a few suggestions regarding dependency management, such as pinning a version and reconsidering the practice of including a wheel file directly in the repository.
```diff
 from core.common.log import LOGGER
 from sedna.common.class_factory import ClassType, ClassFactory
-from models import HuggingfaceLLM, APIBasedLLM, VllmLLM, EagleSpecDecModel, LadeSpecDecLLM
+from models import HuggingfaceLLM, APIBasedLLM, VllmLLM, EagleSpecDecModel
```
```diff
 kaggle
-groq
\ No newline at end of file
+groq
+retry
```
```diff
 matplotlib
-onnx
\ No newline at end of file
+onnx
+colorlog>=6.10.0
```
```diff
-onnx
\ No newline at end of file
+onnx
+colorlog>=6.10.0
+./examples/resources/third_party/sedna-0.6.0.1-py3-none-any.whl
```
Including wheel files for dependencies directly in the repository is generally discouraged as it bloats the repository size and can complicate dependency management. Is it possible to install sedna from a package index like PyPI? If this is a private or modified version, hosting it on a private package index would be a better practice.
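If a compatible Sedna release were available on a package index, the wheel path could be replaced with an ordinary pinned requirement. A before/after sketch of the `requirements.txt` line — the package name and version in the alternative are assumptions for illustration, not verified index metadata:

```
# current approach: install from the repo-bundled wheel
./examples/resources/third_party/sedna-0.6.0.1-py3-none-any.whl

# hypothetical alternative, if the same release were published to an index:
# sedna==0.6.0.1
```

A pinned index requirement keeps the repository small and lets `pip` resolve transitive dependencies normally; the wheel approach remains the fallback when the needed build exists only in-repo.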
Pull request overview
Fixes the cloud-edge collaborative LLM example setup on fresh installs by ensuring required Python dependencies are installed and by removing a failing dead import that prevents the example from starting.
Changes:
- Add missing core dependencies (`colorlog`, Sedna wheel) to root `requirements.txt`.
- Add missing example dependency (`retry`) to the example's `requirements.txt`.
- Remove a non-existent model import from the LLM edge model example module.
Reviewed changes
Copilot reviewed 2 out of 3 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| `requirements.txt` | Adds missing logging dependency and installs Sedna from the repo-bundled wheel to unblock fresh installs. |
| `examples/cloud-edge-collaborative-inference-for-llm/requirements.txt` | Adds `retry`, needed by the API-based LLM implementation. |
| `examples/cloud-edge-collaborative-inference-for-llm/testalgorithms/query-routing/edge_model.py` | Removes a dead/non-existent import that previously caused a startup ImportError. |
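For context on why the example needs `retry`: the PyPI `retry` package supplies a decorator that re-invokes a function when it raises, which suits flaky network calls to an LLM API. Below is a rough, stdlib-only stand-in for that decorator — the parameter names mirror the package's, but this is an illustrative reimplementation, not the library itself:

```python
import functools
import time

def retry(tries=3, delay=0.0, exceptions=(Exception,)):
    """Minimal stand-in for the PyPI `retry` decorator:
    re-invoke the wrapped function up to `tries` times."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            remaining = tries
            while True:
                try:
                    return fn(*args, **kwargs)
                except exceptions:
                    remaining -= 1
                    if remaining <= 0:
                        raise  # out of attempts: propagate the error
                    time.sleep(delay)
        return wrapper
    return deco

calls = {"n": 0}

@retry(tries=3)
def flaky_api_call():
    # Fails twice, then succeeds -- mimics a transient API error.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(flaky_api_call())  # succeeds on the third attempt
```

Without the dependency installed, the `from retry import retry` line in the API-based model fails on a fresh environment, which is the failure this PR addresses.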
```diff
 from sedna.common.class_factory import ClassType, ClassFactory
-from models import HuggingfaceLLM, APIBasedLLM, VllmLLM, EagleSpecDecModel, LadeSpecDecLLM
+from models import HuggingfaceLLM, APIBasedLLM, VllmLLM, EagleSpecDecModel
```
LadeSpecDecLLM was removed from the imports, but load() still references it when self.backend == "LadeSpecDec". This will raise a NameError if that backend option is ever enabled/used. Either remove the LadeSpecDec branch (and any related docs/config), or restore a valid implementation/import for LadeSpecDecLLM and include it in the supported backend validation list.
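To make the reviewer's point concrete, here is a minimal, hypothetical sketch (not the actual ianvs `edge_model.py`) of why the dangling branch fails only at runtime: Python resolves names inside a function body when the function is called, so the missing `LadeSpecDecLLM` goes unnoticed until that backend is actually selected.

```python
class EdgeModel:
    # Hypothetical stand-in for the example's edge model wrapper.
    def __init__(self, backend):
        self.backend = backend

    def load(self):
        # Surviving backends still work; the LadeSpecDec branch now
        # references a name that is no longer defined anywhere.
        if self.backend == "EagleSpecDec":
            return "EagleSpecDecModel instance"
        elif self.backend == "LadeSpecDec":
            # NameError is raised here, at call time, not at import time.
            return LadeSpecDecLLM()
        raise ValueError(f"unsupported backend: {self.backend}")

try:
    EdgeModel("LadeSpecDec").load()
except NameError as err:
    print(f"NameError: {err}")
```

This is why a fresh install can appear healthy until someone configures the `LadeSpecDec` backend; removing the branch (or restoring a valid implementation) closes the gap.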
Description:
Problem
The cloud-edge collaborative inference for LLM example fails to run on fresh installations due to missing dependencies and a dead code import. This affects new users trying to run this example.
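The startup failure can be reproduced in miniature. The sketch below fakes a `models` package that no longer defines `LadeSpecDecLLM` (the module layout is illustrative, not the real one): importing a missing name raises ImportError immediately, before any example code can run.

```python
import sys
import types

# Fake `models` module that defines only a surviving model class.
models = types.ModuleType("models")
models.EagleSpecDecModel = type("EagleSpecDecModel", (), {})
sys.modules["models"] = models

try:
    # Mirrors the shape of the old import line in edge_model.py.
    from models import EagleSpecDecModel, LadeSpecDecLLM
except ImportError as err:
    print(f"ImportError: {err}")
```

Because the import sits at module top level, the whole example aborts at startup, which is why removing the dead import (rather than guarding it) unblocks fresh installs.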
What This PR Fixes
This PR fixes 4 bugs that block the example from running:

- Missing `sedna` framework dependency
- Missing `colorlog` dependency
- Missing `retry` dependency
- Dead `LadeSpecDecLLM` import

Changes Made
Files Modified:
- `requirements.txt`: Added `colorlog` and `sedna`
- `examples/cloud-edge-collaborative-inference-for-llm/requirements.txt`: Added `retry`
- `examples/cloud-edge-collaborative-inference-for-llm/testalgorithms/query-routing/edge_model.py`: Removed dead import

Testing
Tested on a fresh Ubuntu 24.04 installation:
Additional Context
Discovered during LFX Mentorship 2026 pre-test for Issue #304 (Cloud-Edge Simulation Benchmark for LLM Speculative Decoding).
Related Issues
Fixes #333