
fix: Add missing dependencies and remove dead import in cloud-edge LLM example#334

Open
BhoomiAgrawal12 wants to merge 1 commit into kubeedge:main from BhoomiAgrawal12:fix/cloud-edge-llm-dependencies

Conversation

@BhoomiAgrawal12

Description:

Problem

The cloud-edge collaborative inference for LLM example fails to run on fresh installations due to missing dependencies and a dead import, blocking new users who try to run it.

What This PR Fixes

This PR fixes 4 bugs that block the example from running:

  1. Missing sedna Framework
  2. Missing colorlog Dependency
  3. Missing retry Dependency
  4. Dead Import LadeSpecDecLLM
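
Each of these bugs surfaces as an import failure at startup. As a rough illustration (not part of the PR; module names are assumed to match their import names), a pre-flight check can confirm the dependency fixes are in place before launching the benchmark:

```python
# Hypothetical pre-flight dependency check for the example (illustrative
# only, not part of this PR). Checks that each required package is
# importable in the current environment.
import importlib.util

REQUIRED_MODULES = ["sedna", "colorlog", "retry"]

def missing_modules(modules=REQUIRED_MODULES):
    """Return the subset of module names that cannot be imported."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

if __name__ == "__main__":
    missing = missing_modules()
    if missing:
        print("Missing dependencies: " + ", ".join(missing))
    else:
        print("All example dependencies are installed.")
```

Running this before `ianvs -f ...` turns three separate mid-run ImportErrors into one actionable message.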

Changes Made

Files Modified:

  • requirements.txt - Added colorlog and sedna
  • examples/cloud-edge-collaborative-inference-for-llm/requirements.txt - Added retry
  • examples/cloud-edge-collaborative-inference-for-llm/testalgorithms/query-routing/edge_model.py - Removed dead import

Testing

Tested on a fresh Ubuntu 24.04 installation:

python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
pip install -e .
pip install -r examples/cloud-edge-collaborative-inference-for-llm/requirements.txt

# All imports now work correctly
ianvs -f examples/cloud-edge-collaborative-inference-for-llm/benchmarkingjob.yaml
# Benchmark runs successfully

Additional Context

Discovered during LFX Mentorship 2026 pre-test for Issue #304 (Cloud-Edge Simulation Benchmark for LLM Speculative Decoding).

Related Issues

Fixes #333

…M example (kubeedge#333)

Signed-off-by: BhoomiAgrawal12 <bhoomiagrawal1212@gmail.com>
Copilot AI review requested due to automatic review settings (February 9, 2026, 19:25)
@kubeedge-bot
Collaborator

Welcome @BhoomiAgrawal12! It looks like this is your first PR to kubeedge/ianvs 🎉

@kubeedge-bot
Collaborator

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: BhoomiAgrawal12
To complete the pull request process, please assign moorezheng after the PR has been reviewed.
You can assign the PR to them by writing /assign @moorezheng in a comment when ready.

The full list of commands accepted by this bot can be found here.

Details: Needs approval from an approver in each of these files.

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@gemini-code-assist

Summary of Changes

Hello @BhoomiAgrawal12, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves several critical issues that were preventing the successful execution of the cloud-edge collaborative inference for LLM example. By incorporating essential Python dependencies and eliminating an unused import, these changes guarantee that the example can be run without errors on new installations, thereby enhancing the user experience for those exploring this feature.

Highlights

  • Dependency Resolution: Addresses critical missing dependencies, including sedna, colorlog, and retry, which previously prevented the cloud-edge LLM example from running on fresh installations.
  • Code Cleanup: Removes a dead import for LadeSpecDecLLM from edge_model.py, improving code hygiene and reducing potential errors.
  • Example Functionality: Ensures the cloud-edge collaborative inference for LLM example is now fully functional and runnable, significantly improving the onboarding experience for new users.


Changelog
  • examples/cloud-edge-collaborative-inference-for-llm/requirements.txt
    • Added the retry package to ensure robust operations within the example's environment.
  • examples/cloud-edge-collaborative-inference-for-llm/testalgorithms/query-routing/edge_model.py
    • Removed the unused import LadeSpecDecLLM, cleaning up the module and removing dead code.
  • requirements.txt
    • Added colorlog>=6.10.0 for enhanced logging capabilities across the project.
    • Included ./examples/resources/third_party/sedna-0.6.0.1-py3-none-any.whl to integrate the Sedna framework as a core dependency.

@kubeedge-bot kubeedge-bot added the size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. label Feb 9, 2026

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request aims to fix issues preventing an LLM example from running by adding missing dependencies and removing a supposedly dead import. The changes correctly add retry, colorlog, and sedna as dependencies. However, I've found a critical issue where a removed import, LadeSpecDecLLM, is still used in the code, which will cause a runtime error. Additionally, for better maintainability and reproducibility, I've made a few suggestions regarding dependency management, such as pinning a version and reconsidering the practice of including a wheel file directly in the repository.

  from core.common.log import LOGGER
  from sedna.common.class_factory import ClassType, ClassFactory
- from models import HuggingfaceLLM, APIBasedLLM, VllmLLM, EagleSpecDecModel, LadeSpecDecLLM
+ from models import HuggingfaceLLM, APIBasedLLM, VllmLLM, EagleSpecDecModel


critical

You have removed the import for LadeSpecDecLLM, but its usage still exists on line 77 within the load method. This will cause a NameError if the backend is configured as 'LadeSpecDec'. To fix this, you should also remove the logic block that handles the LadeSpecDec backend.
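
The hazard can be sketched with a minimal, illustrative dispatch structure (simplified names, not the actual Ianvs edge_model.py): with the import removed but the branch kept, a clean ImportError at startup becomes a NameError that only fires when the LadeSpecDec backend is selected.

```python
# Simplified sketch of the dangling-branch hazard (illustrative class and
# names, not the actual Ianvs code). The import of LadeSpecDecLLM has been
# removed, but the dispatch branch that references it has not.
class EdgeModel:
    def __init__(self, backend):
        self.backend = backend

    def load(self):
        if self.backend == "EagleSpecDec":
            return "EagleSpecDecModel"     # stands in for a real model object
        elif self.backend == "LadeSpecDec":
            # Dangling branch: with the import gone, this line raises
            # NameError at runtime instead of failing fast at import time.
            return LadeSpecDecLLM()        # noqa: F821
        else:
            raise ValueError(f"Unsupported backend: {self.backend}")
```

Deleting the LadeSpecDec branch (and any config that can select it) makes unsupported backends fail with an explicit ValueError rather than a confusing NameError.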

  kaggle
- groq (no newline at end of file)
+ groq
+ retry


medium

For better reproducibility of this example, it's a good practice to pin dependency versions. Please consider adding a specific version for the retry package. The latest version on PyPI is 0.9.2.

retry==0.9.2

  matplotlib
- onnx (no newline at end of file)
+ onnx
+ colorlog>=6.10.0


medium

There are trailing spaces at the end of this line. Please remove them for consistency and cleanliness.

colorlog>=6.10.0

- onnx (no newline at end of file)
+ onnx
+ colorlog>=6.10.0
+ ./examples/resources/third_party/sedna-0.6.0.1-py3-none-any.whl


medium

Including wheel files for dependencies directly in the repository is generally discouraged as it bloats the repository size and can complicate dependency management. Is it possible to install sedna from a package index like PyPI? If this is a private or modified version, hosting it on a private package index would be a better practice.


Copilot AI left a comment


Pull request overview

Fixes the cloud-edge collaborative LLM example setup on fresh installs by ensuring required Python dependencies are installed and by removing a failing dead import that prevents the example from starting.

Changes:

  • Add missing core dependencies (colorlog, Sedna wheel) to root requirements.txt.
  • Add missing example dependency (retry) to the example’s requirements.txt.
  • Remove a non-existent model import from the LLM edge model example module.

Reviewed changes

Copilot reviewed 2 out of 3 changed files in this pull request and generated 1 comment.

  • requirements.txt - Adds the missing logging dependency and installs Sedna from the repo-bundled wheel to unblock fresh installs.
  • examples/cloud-edge-collaborative-inference-for-llm/requirements.txt - Adds retry, needed by the API-based LLM implementation.
  • examples/cloud-edge-collaborative-inference-for-llm/testalgorithms/query-routing/edge_model.py - Removes a dead/non-existent import that previously caused a startup ImportError.


Comment on lines 20 to 22:

  from sedna.common.class_factory import ClassType, ClassFactory
- from models import HuggingfaceLLM, APIBasedLLM, VllmLLM, EagleSpecDecModel, LadeSpecDecLLM
+ from models import HuggingfaceLLM, APIBasedLLM, VllmLLM, EagleSpecDecModel


Copilot AI Feb 9, 2026


LadeSpecDecLLM was removed from the imports, but load() still references it when self.backend == "LadeSpecDec". This will raise a NameError if that backend option is ever enabled/used. Either remove the LadeSpecDec branch (and any related docs/config), or restore a valid implementation/import for LadeSpecDecLLM and include it in the supported backend validation list.



Development

Successfully merging this pull request may close these issues.

[Cloud-Edge LLM Example] Missing dependencies and dead code import block benchmark execution
