Update Python dependencies to RHOAI index 3.4 #176
syedriko wants to merge 1 commit into lightspeed-core:main
Conversation
Walkthrough
This PR updates project infrastructure and dependencies from RHOAI version 3.3 to 3.4, including updated Tekton pipeline pip package lists, removed version constraints for the docling dependency, simplified build scripts by eliminating arch-specific wheel-injection logic, and regenerated dependency lock files reflecting new package versions.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
scripts/konflux_requirements.sh (1)
118-128: ⚠️ Potential issue | 🟠 Major: Keep per-arch hash files generated or stop consuming them.
The script still leaves requirements.hashes.wheel.cpu.x86_64.txt and requirements.hashes.wheel.cpu.aarch64.txt referenced by both Tekton configs, but this flow no longer writes them. Regenerating requirements can therefore leave stale arch-specific pins in files that hermetic prefetch still consumes. Either regenerate harmless placeholders or remove those files from both requirements_files lists.
🛠️ Minimal placeholder-generation fix
 if grep -qE '^(torch|torchvision|triton)==' "$WHEEL_HASH_FILE"; then
   awk '
     /^torch==|^torchvision==|^triton==/ { skip=1; next }
     skip && /^[ \t]/ { next }
     skip && /^[a-zA-Z0-9]/ { skip=0 }
     { print }
   ' "$WHEEL_HASH_FILE" > "${WHEEL_HASH_FILE}.strip" && mv "${WHEEL_HASH_FILE}.strip" "$WHEEL_HASH_FILE"
 fi
+for _arch_hash_file in "$WHEEL_HASH_CPU_X86" "$WHEEL_HASH_CPU_AARCH"; do
+  printf '%s\n' "# No architecture-specific wheel pins; generated by konflux_requirements.sh." > "$_arch_hash_file"
+done
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/konflux_requirements.sh` around lines 118 - 128, The script modifies WHEEL_HASH_FILE but no longer generates per-arch files (requirements.hashes.wheel.cpu.x86_64.txt and requirements.hashes.wheel.cpu.aarch64.txt) that Tekton recipes still reference; either ensure placeholder per-arch files are regenerated after the strip step (create harmless empty or header-only files named requirements.hashes.wheel.cpu.x86_64.txt and requirements.hashes.wheel.cpu.aarch64.txt when WHEEL_HASH_FILE is mutated), or remove those filenames from the Tekton requirements_files lists so they are not consumed; update scripts/konflux_requirements.sh to write the placeholders (using the WHEEL_HASH_FILE basename logic) or update the Tekton configs' requirements_files entries to stop referencing the arch-specific files, and make sure references to WHEEL_HASH_FILE and the two arch-specific filenames are consistent across both Tekton configs.
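If the second option (dropping the stale files from the prefetch lists) is chosen, a small audit can confirm that nothing in the Tekton prefetch configs still points at files the script no longer writes. The sketch below is illustrative only: the missing_requirements_files helper and the config dict are assumptions modeled on a typical pip prefetch-input JSON, not code from this repository.

```python
def missing_requirements_files(prefetch_config: dict, existing: set[str]) -> list[str]:
    """Return requirements_files entries not present in the repository."""
    return [f for f in prefetch_config.get("requirements_files", []) if f not in existing]

# Hypothetical prefetch snippet: one referenced file survives, one is stale.
config = {
    "type": "pip",
    "requirements_files": [
        "requirements.hashes.wheel.txt",
        "requirements.hashes.wheel.cpu.x86_64.txt",  # no longer generated
    ],
}
on_disk = {"requirements.hashes.wheel.txt"}
print(missing_requirements_files(config, on_disk))
# -> ['requirements.hashes.wheel.cpu.x86_64.txt']
```

In real use, existing would come from scanning the checkout and prefetch_config from parsing the prefetch-input blocks in both .tekton YAML files.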
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.tekton/rag-tool-push.yaml:
- Line 64: The binary.packages allowlist is missing "llama-stack-api" while
"llama-stack" and "llama-stack-client" are present and requirements include
llama-stack-api==0.5.0; update the "packages" comma-separated string in the
binary.packages entry (the same list shown in the diff) to include
"llama-stack-api" and apply the same change in the peer pipeline config that has
an identical allowlist (the other .tekton rag-tool file), ensuring the package
name is added exactly as "llama-stack-api".
In `@requirements-build.txt`:
- Around line 23-33: The build requirements file contains conflicting duplicate
pins (e.g., hatchling and setuptools) produced by pip-compile; fix this by
resolving the source dependency conflicts in your
requirements.source.txt.build—identify where hatchling and setuptools are
required with different versions (direct or transitive), choose a single
compatible version (or add an explicit constraint/compatibility marker) in
requirements.source.txt.build, then re-run pip-compile to regenerate
requirements-build.txt so it contains a single pinned entry per package.
In `@requirements.overrides.txt`:
- Around line 21-22: The requirements override pins for torch and torchvision
violate pyproject.toml constraints: change the pinned versions in
requirements.overrides (currently "torch==2.10.0" and "torchvision==0.25.0") to
versions that respect the upper bounds specified in pyproject.toml (e.g.,
downgrade to the allowed ranges such as "torch>=2.9.0,<2.10.0" and
"torchvision>=0.24.0,<0.25.0") or alternatively update pyproject.toml to
explicitly allow the newer exact versions; ensure consistency between the
override entries for "torch" and "torchvision" and the version ranges declared
in pyproject.toml.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: ac2f76b9-682d-4dfd-a7cf-6bd2aa5eca09
⛔ Files ignored due to path filters (1)
uv.lock is excluded by !**/*.lock
📒 Files selected for processing (11)
.tekton/rag-tool-pull-request.yaml, .tekton/rag-tool-push.yaml, README.md, build-args-konflux.conf, pyproject.toml, requirements-build.txt, requirements.hashes.source.txt, requirements.hashes.wheel.pypi.txt, requirements.hashes.wheel.txt, requirements.overrides.txt, scripts/konflux_requirements.sh
  "requirements_build_files": ["requirements-build.txt"],
  "binary": {
-   "packages": "aiohappyeyeballs,aiohttp,aiosignal,annotated-doc,annotated-types,antlr4-python3-runtime,asyncpg,beautifulsoup4,cffi,colorama,cryptography,dataclasses-json,defusedxml,deprecated,dill,distro,docling-parse,einops,et-xmlfile,faiss-cpu,filetype,fire,frozenlist,greenlet,h11,hf-xet,httpcore,httpx,httpx-sse,idna,importlib-metadata,jinja2,joblib,jsonlines,jsonref,jsonschema,jsonschema-specifications,lxml,markdown-it-py,marko,markupsafe,marshmallow,mdurl,mpire,mpmath,multidict,mypy-extensions,nest-asyncio,networkx,numpy,omegaconf,openpyxl,packaging,pandas,pillow,pluggy,prompt-toolkit,propcache,psycopg2-binary,pycparser,pydantic,pydantic-core,pylatexenc,pypdfium2,python-dateutil,python-docx,python-pptx,pyyaml,referencing,rpds-py,rtree,safetensors,scikit-learn,scipy,setuptools,shapely,shellingham,six,sniffio,sympy,termcolor,threadpoolctl,tiktoken,tomlkit,torchvision,tqdm,transformers,tree-sitter,tree-sitter-c,tree-sitter-javascript,tree-sitter-python,tree-sitter-typescript,triton,typing-extensions,typing-inspect,typing-inspection,urllib3,xlsxwriter,zipp,uv-build,uv,pip,maturin,jiter,opencv-python,rapidocr,tokenizers,torch",
+   "packages": "accelerate,aiohappyeyeballs,aiohttp,aiosignal,aiosqlite,annotated-doc,annotated-types,antlr4-python3-runtime,anyio,asyncpg,attrs,banks,beautifulsoup4,certifi,cffi,chardet,charset-normalizer,circuitbreaker,click,colorama,colorlog,cryptography,dataclasses-json,defusedxml,deprecated,dill,dirtyjson,distro,docling-ibm-models,docling-parse,einops,et-xmlfile,faiss-cpu,filetype,fire,frozenlist,fsspec,googleapis-common-protos,greenlet,griffe,griffecli,griffelib,h11,hf-xet,httpcore,httpx,httpx-sse,importlib-metadata,jinja2,jiter,joblib,jsonlines,jsonref,jsonschema,jsonschema-specifications,latex2mathml,llama-index-embeddings-openai,llama-index-instrumentation,llama-index-llms-openai,llama-index-workflows,llama-stack,llama-stack-client,markdown-it-py,marko,markupsafe,marshmallow,mcp,mdurl,mpire,mpmath,multidict,mypy-extensions,nest-asyncio,networkx,nltk,numpy,omegaconf,opencv-python,openpyxl,opentelemetry-api,opentelemetry-exporter-otlp-proto-common,opentelemetry-exporter-otlp-proto-http,opentelemetry-proto,opentelemetry-sdk,opentelemetry-semantic-conventions,oracledb,packaging,pandas,pgvector,pillow,platformdirs,pluggy,polyfactory,prompt-toolkit,propcache,protobuf,psutil,pyaml,pycparser,pygments,pyjwt,pylatexenc,pyopenssl,pypdf,pypdfium2,python-dateutil,python-docx,python-dotenv,python-multipart,python-pptx,pytz,pyyaml,referencing,regex,requests,rich,rpds-py,rtree,safetensors,scikit-learn,scipy,semchunk,setuptools,shapely,shellingham,six,sniffio,soupsieve,sqlalchemy,sse-starlette,starlette,striprtf,sympy,tabulate,tenacity,termcolor,threadpoolctl,tiktoken,tinytag,tokenizers,tomlkit,torch,torchvision,tornado,tqdm,transformers,tree-sitter,tree-sitter-c,tree-sitter-javascript,tree-sitter-python,tree-sitter-typescript,triton,typer,typing-extensions,typing-inspect,typing-inspection,tzdata,urllib3,wcwidth,websockets,wrapt,xlsxwriter,yarl,zipp,uv-build,uv,pip,maturin,psycopg2-binary,rapidocr",
🧩 Analysis chain
🏁 Script executed:
# First, locate and examine the two YAML files
find . -name "rag-tool-push.yaml" -o -name "rag-tool-pull-request.yaml" | head -20
Repository: lightspeed-core/rag-content
Length of output: 136
🏁 Script executed:
# Search for wheel hash requirements files
find . -name "*requirements*" -o -name "*wheel*" | grep -E "\.(txt|lock)$" | head -20
Repository: lightspeed-core/rag-content
Length of output: 626
🏁 Script executed:
# Check if llama-stack-api appears in the YAML files
rg "llama-stack-api" .tekton/ 2>/dev/null || echo "No matches found"
Repository: lightspeed-core/rag-content
Length of output: 87
🏁 Script executed:
# Get the full content of rag-tool-push.yaml to verify line 64
cat .tekton/rag-tool-push.yaml | head -80
Repository: lightspeed-core/rag-content
Length of output: 4671
🏁 Script executed:
# Check all requirements files for llama-stack-api
rg "llama-stack-api" ./requirements*.txt 2>/dev/null || echo "No matches in requirements files"
Repository: lightspeed-core/rag-content
Length of output: 397
🏁 Script executed:
# Check the pull-request YAML for comparison
cat .tekton/rag-tool-pull-request.yaml | grep -A 5 "llama-stack"
Repository: lightspeed-core/rag-content
Length of output: 2128
🏁 Script executed:
# Extract the packages line from pull-request YAML to see if it has llama-stack-api
cat .tekton/rag-tool-pull-request.yaml | sed -n '50,80p'
Repository: lightspeed-core/rag-content
Length of output: 2835
🏁 Script executed:
# Extract and compare the packages lists from both YAML files
echo "=== PUSH YAML packages ===" && \
cat .tekton/rag-tool-push.yaml | sed -n '64p' | grep -o '"packages": "[^"]*"' | cut -d'"' -f4 | tr ',' '\n' | grep llama && \
echo "=== PULL-REQUEST YAML packages ===" && \
cat .tekton/rag-tool-pull-request.yaml | grep -o '"packages": "[^"]*"' | cut -d'"' -f4 | tr ',' '\n' | grep llama
Repository: lightspeed-core/rag-content
Length of output: 402
🏁 Script executed:
# Verify that llama-stack-api is in source requirements files
grep -l "llama-stack-api" ./requirements.hashes.source*.txt
Repository: lightspeed-core/rag-content
Length of output: 141
Add llama-stack-api to the binary packages allowlist.
Both .tekton/rag-tool-push.yaml and .tekton/rag-tool-pull-request.yaml use identical package allowlists that include llama-stack and llama-stack-client but omit llama-stack-api. However, llama-stack-api==0.5.0 appears in requirements.hashes.source.txt and requirements.hashes.source.cuda.txt. Add it to the binary.packages list to maintain consistency with requirements files across both pipeline configurations.
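A lightweight drift check can catch this class of gap before review. The sketch below is hypothetical tooling, not code from this repository: allowlist_gaps and the toy data are invented for illustration, and real usage would extract the packages string from the Tekton YAML and read the pinned requirements files from disk.

```python
import re

def allowlist_gaps(packages_csv: str, requirements_text: str) -> set[str]:
    """Return pinned package names absent from a comma-separated allowlist."""
    allowed = {p.strip().lower() for p in packages_csv.split(",") if p.strip()}
    pinned = {
        m.group(1).lower()
        for m in re.finditer(r"^([A-Za-z0-9_.-]+)==", requirements_text, re.M)
    }
    return pinned - allowed

# Toy data mirroring the gap described above: llama-stack-api is pinned
# in the requirements but missing from the allowlist.
packages = "llama-stack,llama-stack-client,torch"
reqs = "llama-stack==0.5.0\nllama-stack-api==0.5.0\ntorch==2.10.0\n"
print(sorted(allowlist_gaps(packages, reqs)))  # ['llama-stack-api']
```

Running such a check over both .tekton files would also enforce the "identical allowlists" invariant the comment relies on.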
hatch-vcs==0.5.0
    # via
    #   attrs
    #   chardet
    #   filelock
    #   fsspec
    #   platformdirs
    # via filelock
hatchling==1.26.3
    # via
    #   hatch-fancy-pypi-readme
    #   llama-stack-client
    #   openai
hatchling==1.29.0
    # via
    #   attrs
    #   banks
    #   chardet
    #   expandvars
    #   filelock
    #   fsspec
    #   griffe
    #   griffecli
    #   griffelib
    #   hatch-fancy-pypi-readme
    #   hatch-vcs
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Detect duplicate pinned package names in requirements-build.txt.
set -euo pipefail
python - <<'PY'
import re
from collections import defaultdict
fname = "requirements-build.txt"
pat = re.compile(r'^([A-Za-z0-9_.-]+)==([^\s]+)')
seen = defaultdict(list)
with open(fname, encoding="utf-8") as f:
    for i, line in enumerate(f, 1):
        m = pat.match(line.strip())
        if m:
            seen[m.group(1).lower()].append((i, m.group(2)))
dups = {k: v for k, v in seen.items() if len(v) > 1}
if not dups:
    print("No duplicate pinned package names found.")
else:
    print("Duplicate pinned package names:")
    for pkg, rows in sorted(dups.items()):
        print(f"- {pkg}: " + ", ".join([f'line {ln} -> {ver}' for ln, ver in rows]))
PY
Repository: lightspeed-core/rag-content
Length of output: 203
🏁 Script executed:
cat -n requirements-build.txt | head -90
Repository: lightspeed-core/rag-content
Length of output: 2561
Fix conflicting duplicate package pins in auto-generated build requirements.
This auto-generated file contains conflicting pins of the same packages (hatchling at lines 25 and 29, setuptools at lines 79 and 81). pip-compile should not produce this output. The duplicates indicate unresolved dependency conflicts in your source specification. When pip installs this file, it uses only the last occurrence of each package, silently dropping the others, making the resolution non-deterministic and hiding the underlying constraint conflict.
Fix by resolving the conflicting requirements in your source dependency file (requirements.source.txt.build).
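The conflict can be made concrete with the packaging library (the same specifier machinery pip builds on): intersecting the two duplicate hatchling pins yields a specifier set that no version can satisfy, which is exactly why pip's last-occurrence behavior silently masks the problem rather than surfacing it.

```python
from packaging.specifiers import SpecifierSet

# The two duplicate hatchling pins from the generated file.
combined = SpecifierSet("==1.26.3") & SpecifierSet("==1.29.0")

# No single version satisfies both exact pins, so the file as a whole
# is unsatisfiable; pip merely keeps whichever pin it reads last.
print("1.26.3" in combined, "1.29.0" in combined)  # False False
```

A CI step asserting that every pinned name appears exactly once in the regenerated file would prevent this from landing again.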
torch==2.10.0
torchvision==0.25.0
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify torch/torchvision constraints in pyproject.toml and overrides side-by-side.
set -euo pipefail
echo "== pyproject constraints =="
rg -n '^\s*"torch(vision)?' pyproject.toml -C1
echo
echo "== requirements.overrides pins =="
rg -n '^(torch|torchvision)=='
Repository: lightspeed-core/rag-content
Length of output: 360
Torch and torchvision override pins violate project constraints in pyproject.toml.
Lines 21-22 pin torch==2.10.0 and torchvision==0.25.0, but pyproject.toml constrains these to torch>=2.9.0,<2.10.0 and torchvision>=0.24.0,<0.25.0. Update the overrides to respect the upper bound constraints or update pyproject.toml to align with the intended versions.
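A guard like the following could be wired into the requirements-regeneration flow to fail fast when an override pin escapes the range declared in pyproject.toml. It is a sketch using the packaging library; the pins_outside_range helper is hypothetical, and only the version numbers are taken from the comment above.

```python
from packaging.requirements import Requirement
from packaging.version import Version

def pins_outside_range(constraints: dict[str, str], pins: dict[str, str]) -> list[str]:
    """Names whose pinned version falls outside its declared specifier range."""
    bad = []
    for name, spec in constraints.items():
        pinned = pins.get(name)
        if pinned is not None and Version(pinned) not in Requirement(name + spec).specifier:
            bad.append(name)
    return bad

# Ranges and pins quoted in the comment above.
constraints = {"torch": ">=2.9.0,<2.10.0", "torchvision": ">=0.24.0,<0.25.0"}
pins = {"torch": "2.10.0", "torchvision": "0.25.0"}
print(pins_outside_range(constraints, pins))  # ['torch', 'torchvision']
```

In practice the constraints would be parsed out of pyproject.toml and the pins out of requirements.overrides.txt rather than hard-coded.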
7ea3892 to 55d6924 (compare)
55d6924 to 262793b (compare)
Description
Update Python dependencies to RHOAI index 3.4
Type of change
Tools used to create PR
Identify any AI code assistants used in this PR (for transparency and review context)
Related Tickets & Documents
Checklist before requesting a review
Testing
Summary by CodeRabbit
Release Notes