renovate[bot] commented Aug 13, 2025

Coming soon: The Renovate bot (GitHub App) will be renamed to Mend. PRs from Renovate will soon appear from 'Mend'. Learn more here.

This PR contains the following updates:

| Package | Change |
|---------|--------|
| ray | `==1.8.0` -> `==2.43.0` |

Warning

Some dependencies could not be looked up. Check the Dependency Dashboard for more information.

GitHub Vulnerability Alerts

CVE-2023-6019

A command injection vulnerability in Ray's cpu_profile URL parameter allows attackers to execute OS commands on the system running the Ray dashboard remotely, without authentication.
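
The vulnerability class can be illustrated with a generic sketch (hypothetical function names, not Ray's actual dashboard code): interpolating a request parameter into a shell command allows injection, while validating the value and passing an argument list does not.

```python
import subprocess

def profile_unsafe(pid: str) -> str:
    # Vulnerable pattern: the URL parameter is interpolated into a shell
    # command, so a value like "1; id" runs arbitrary OS commands.
    return subprocess.run(
        f"echo profiling pid {pid}",
        shell=True, capture_output=True, text=True,
    ).stdout

def profile_safe(pid: str) -> str:
    # Safer pattern: validate the parameter and pass an argument list,
    # so no shell ever interprets the value.
    if not pid.isdigit():
        raise ValueError("pid must be numeric")
    return subprocess.run(
        ["echo", "profiling", "pid", pid],
        capture_output=True, text=True,
    ).stdout
```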

CVE-2023-6020

LFI (local file inclusion) in Ray's /static/ directory allows attackers to read any file on the server without authentication. The issue is fixed in version 2.8.1+. The Ray maintainers' response can be found here: https://www.anyscale.com/blog/update-on-ray-cves-cve-2023-6019-cve-2023-6020-cve-2023-6021-cve-2023-48022-cve-2023-48023

CVE-2023-6021

LFI (local file inclusion) in Ray's log API endpoint allows attackers to read any file on the server without authentication. The issue is fixed in version 2.8.1+. The Ray maintainers' response can be found here: https://www.anyscale.com/blog/update-on-ray-cves-cve-2023-6019-cve-2023-6020-cve-2023-6021-cve-2023-48022-cve-2023-48023
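
The underlying bug class in both LFI advisories can be sketched generically (hypothetical paths and names, not Ray's code): a file-serving endpoint must resolve the requested path and confirm it stays inside the served directory before opening it.

```python
import os

BASE_DIR = os.path.realpath("/srv/ray-logs")  # hypothetical served directory

def resolve_log_path(requested: str) -> str:
    # Vulnerable endpoints join the user-supplied name directly, so a value
    # like "../../etc/passwd" escapes the directory. Resolving the candidate
    # path and checking it stays under BASE_DIR blocks the traversal.
    candidate = os.path.realpath(os.path.join(BASE_DIR, requested))
    if os.path.commonpath([candidate, BASE_DIR]) != BASE_DIR:
        raise PermissionError(f"path escapes log directory: {requested}")
    return candidate
```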

CVE-2025-1979

Versions of the package ray before 2.43.0 are vulnerable to Insertion of Sensitive Information into Log File: the Redis password is written to the standard logs. If the Redis password is passed as an argument, it will be logged and could leak the password.

This is only exploitable if:

  1. Logging is enabled;

  2. Redis is using password authentication;

  3. The logs are accessible to an attacker who can reach that Redis instance.

Note:

Anyone running in this configuration should update to the latest version of Ray and then rotate their Redis password.
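
A generic mitigation for this class of leak (a sketch, not Ray's actual fix) is a logging filter that masks credentials embedded in Redis-style URLs before records are emitted:

```python
import logging
import re

class RedactRedisPassword(logging.Filter):
    # Masks the password in "redis://:password@host" style URLs before a
    # record is emitted. A generic mitigation sketch, not Ray's actual fix.
    _pattern = re.compile(r"(redis://[^:@/\s]*:)([^@\s]+)(@)")

    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = self._pattern.sub(r"\1***\3", str(record.msg))
        return True
```

Attach the filter to a handler with `handler.addFilter(RedactRedisPassword())` so every record passing through that handler is scrubbed.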


Release Notes

ray-project/ray (ray)

v2.43.0

Compare Source

Highlights
  • This release features new modules in Ray Serve and Ray Data for integration with large language models, marking the first step toward addressing #​50639. Ray Data and Ray Serve previously had limited support for LLM deployments: users had to manually configure and manage the underlying LLM engine. This release adds APIs for both batch inference and serving of LLMs within Ray, in ray.data.llm and ray.serve.llm. See the notes below for more details. These APIs are marked as alpha, meaning they may change in future releases without a deprecation period.
  • Ray Train V2 is available to try starting in Ray 2.43! Run your next Ray Train job with the RAY_TRAIN_V2_ENABLED=1 environment variable. See the migration guide for more information.
  • A new integration with uv run allows you to specify Python dependencies for both the driver and workers in a consistent way and enables quick iteration when developing Ray applications (#​50160, #​50462); check out our blog post.
Ray Libraries
Ray Data

🎉 New Features:

  • Ray Data LLM: We are introducing a new module in Ray Data for batch inference with LLMs (currently marked as alpha). It offers a new Processor abstraction that interoperates with existing Ray Data pipelines. This abstraction can be configured in two ways:
    • Using the vLLMEngineProcessorConfig, which configures vLLM to load model replicas for high-throughput model inference.
    • Using the HttpRequestProcessorConfig, which sends HTTP requests to an OpenAI-compatible endpoint for inference.
    • Documentation for these features can be found here.
  • Implement accurate memory accounting for UnionOperator (#​50436)
  • Implement accurate memory accounting for all-to-all operations (#​50290)

💫 Enhancements:

  • Support class constructor args for filter() (#​50245)
  • Persist ParquetDatasource metadata. (#​50332)
  • Rebasing ShufflingBatcher onto try_combine_chunked_columns (#​50296)
  • Improve warning message if required dependency isn't installed (#​50464)
  • Move data-related test logic out of core tests directory (#​50482)
  • Pass executor as an argument to ExecutionCallback (#​50165)
  • Add operator id info to task+actor (#​50323)
  • Abstracting common methods, removing duplication in ArrowBlockAccessor, PandasBlockAccessor (#​50498)
  • Warn if map UDF is too large (#​50611)
  • Replace AggregateFn with AggregateFnV2, cleaning up Aggregation infrastructure (#​50585)
  • Simplify Operator.repr (#​50620)
  • Adding in TaskDurationStats and on_execution_step callback (#​50766)
  • Print Resource Manager stats in release tests (#​50801)

🔨 Fixes:

  • Fix invalid escape sequences in grouped_data.py docstrings (#​50392)
  • Deflake test_map_batches_async_generator (#​50459)
  • Avoid memory leak with pyarrow.infer_type on datetime arrays (#​50403)
  • Fix parquet partition cols to support tensors types (#​50591)
  • Fixing aggregation protocol to be appropriately associative (#​50757)
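
The associativity fix above matters because a distributed engine merges partial aggregates in whatever order blocks complete. A generic sketch (not Ray's AggregateFnV2 internals): combining partial means directly gives grouping-dependent, biased results, while carrying (sum, count) pairs composes correctly.

```python
from functools import reduce

# Combining partial means directly is not a valid distributed combine:
# the result depends on block sizes and grouping order.
def combine_means(a: float, b: float) -> float:
    return (a + b) / 2

# Carrying (sum, count) pairs is associative, so partial aggregates can
# be merged in any order and the division happens once at the end.
def combine_sum_count(a, b):
    return (a[0] + b[0], a[1] + b[1])

blocks = [[1, 2, 3], [4], [5, 6]]
naive = reduce(combine_means, [sum(b) / len(b) for b in blocks])
total, count = reduce(combine_sum_count, [(sum(b), len(b)) for b in blocks])

print(naive)          # 4.25 -- biased by block boundaries
print(total / count)  # 3.5  -- the true mean of 1..6
```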

📖 Documentation:

  • Remove "Stable Diffusion Batch Prediction with Ray Data" example (#​50460)
Ray Train

🎉 New Features:

  • Ray Train V2 is available to try starting in Ray 2.43! Run your next Ray Train job with the RAY_TRAIN_V2_ENABLED=1 environment variable. See the migration guide for more information.
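
Opting in is a single environment variable, per the note above; the script name below is an illustrative placeholder.

```shell
# Run a Ray Train job with the V2 execution path enabled.
# RAY_TRAIN_V2_ENABLED comes from the release notes; train_job.py is hypothetical.
RAY_TRAIN_V2_ENABLED=1 python train_job.py
```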

💫 Enhancements:

  • Add a training ingest benchmark release test (#​50019, #​50299) with a fault tolerance variant (#​50399)
  • Add telemetry for Trainer usage in V2 (#​50321)
  • Add pydantic as a ray[train] extra install (#​46682)
  • Add state tracking to train v2 to make run status, run attempts, and training worker metadata observable (#​50515)

📖 Documentation:

  • Add missing xgboost pip install in example (#​50232)

Ray Tune

📖 Documentation:

  • Update all doc examples off of ray.train imports (#​50458)
  • Update all ray/tune/examples off of ray.train imports (#​50435)
  • Fix typos in persistent storage guide (#​50127)
  • Remove Binder notebook links in Ray Tune docs (#​50621)

🏗 Architecture refactoring:

  • Update RLlib to use ray.tune imports instead of ray.air and ray.train (#​49895)
Ray Serve

🎉 New Features:

  • Ray Serve LLM: We are introducing a new module in Ray Serve to easily integrate open-source LLMs into your Ray Serve deployment, currently marked as alpha. This opens up the powerful capability of composing complex applications with multiple LLMs, a common pattern in emerging applications such as agentic workflows. Ray Serve LLM offers several core components, including:
    • VLLMService: A prebuilt deployment that offers a full-featured vLLM engine integration, with support for features such as LoRA multiplexing and multimodal language models.
    • LLMRouter: An out-of-the-box OpenAI compatible model router that can route across multiple LLM deployments.
    • Documentation can be found at https://docs.ray.io/en/releases-2.43.0/serve/llm/overview.html

💫 Enhancements:

  • Add required_resources to REST API (#​50058)

🔨 Fixes:

  • Fix batched requests hanging after cancellation (#​50054)
  • Properly propagate backpressure error (#​50311)
RLlib

🎉 New Features:

  • Added env vectorization support for multi-agent (new API stack). (#​50437)

🔨 Fixes:

  • Fix SPOT preemption tolerance for large AlgorithmConfig: Pass by reference to RolloutWorker (#​50688)
  • on_workers/env_runners_recreated callback would be called twice. (#​50172)
  • default_resource_request: aggregator actors missing in placement group for local Learner. (#​50219, #​50475)

📖 Documentation:

  • Docs re-do (new API stack):
Ray Core and Ray Clusters
Ray Core

💫 Enhancements:

  • [Core] Enable users to configure python standard log attributes for structured logging (#​49871)
  • [Core] Prestart worker with runtime env (#​49994)
  • [compiled graphs] Support experimental_compile(_default_communicator=comm) (#​50023)
  • [Core] ray.util.Queue Empty and Full exceptions extend queue.Empty and Full (#​50261)
  • [Core] Initial port of Ray to Python 3.13 (#​47984)
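
The practical effect of the ray.util.Queue exception change is that stdlib-style handlers work unchanged. A sketch with a hypothetical stand-in class (not Ray's actual definition):

```python
import queue

class RayQueueEmpty(queue.Empty):
    """Hypothetical stand-in for a library Empty exception that now
    subclasses the stdlib queue.Empty."""

def drain(get_nowait):
    # A stdlib-style consumer: it stops on queue.Empty, regardless of
    # which library raised the (subclassed) exception.
    items = []
    try:
        while True:
            items.append(get_nowait())
    except queue.Empty:
        return items

def fake_ray_get_nowait(it):
    # Simulates a queue that raises its own Empty subclass when drained.
    try:
        return next(it)
    except StopIteration:
        raise RayQueueEmpty() from None

it = iter([1, 2, 3])
print(drain(lambda: fake_ray_get_nowait(it)))  # [1, 2, 3]
```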

🔨 Fixes:

  • [Core] Ignore stale ReportWorkerBacklogRequest (#​50280)
  • [Core] Fix check failure due to negative available resource (#​50517)
Ray Clusters

📖 Documentation:

  • Update the KubeRay docs to v1.3.0.
Ray Dashboard

🎉 New Features:

  • Additional filters for job list page (#​50283)
Thanks

Thank you to everyone who contributed to this release! 🥳
@​liuxsh9, @​justinrmiller, @​CheyuWu, @​400Ping, @​scottsun94, @​bveeramani, @​bhmiller, @​tylerfreckmann, @​hefeiyun, @​pcmoritz, @​matthewdeng, @​dentiny, @​erictang000, @​gvspraveen, @​simonsays1980, @​aslonnie, @​shorbaji, @​LeoLiao123, @​justinvyu, @​israbbani, @​zcin, @​ruisearch42, @​khluu, @​kouroshHakha, @​sijieamoy, @​SergeCroise, @​raulchen, @​anson627, @​bluenote10, @​allenyin55, @​martinbomio, @​rueian, @​rynewang, @​owenowenisme, @​Betula-L, @​alexeykudinkin, @​crypdick, @​jujipotle, @​saihaj, @​EricWiener, @​kevin85421, @​MengjinYan, @​chris-ray-zhang, @​SumanthRH, @​chiayi, @​comaniac, @​angelinalg, @​kenchung285, @​tanmaychimurkar, @​andrewsykim, @​MortalHappiness, @​sven1977, @​richardliaw, @​omatthew98, @​fscnick, @​akyang-anyscale, @​cristianjd, @​Jay-ju, @​spencer-p, @​win5923, @​wxsms, @​stfp, @​letaoj, @​JDarDagran, @​jjyao, @​srinathk10, @​edoakes, @​vincent0426, @​dayshah, @​davidxia, @​DmitriGekhtman, @​GeneDer, @​HYLcool, @​gameofby, @​can-anyscale, @​ryanaoleary, @​eddyxu

v2.42.1

Compare Source

Ray Data

🔨 Fixes:

v2.42.0

Compare Source

Ray Libraries
Ray Data

🎉 New Features:

  • Added read_audio and read_video (#​50016)

💫 Enhancements:

  • Optimized multi-column groupbys (#​45667)
  • Included Ray user-agent in BigQuery client construction (#​49922)

🔨 Fixes:

  • Fixed bug that made read tasks non-deterministic (#​49897)

🗑️ Deprecations:

  • Deprecated num_rows_per_file in favor of min_rows_per_file (#​49978)
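
The rename above follows the usual deprecation-shim pattern; the sketch below uses a hypothetical writer function, not Ray's actual implementation.

```python
import warnings

def write_files(*, min_rows_per_file=None, num_rows_per_file=None):
    # Hypothetical writer illustrating the rename: accept the old keyword,
    # warn, and forward its value to the new one.
    if num_rows_per_file is not None:
        warnings.warn(
            "num_rows_per_file is deprecated; use min_rows_per_file",
            DeprecationWarning,
            stacklevel=2,
        )
        if min_rows_per_file is None:
            min_rows_per_file = num_rows_per_file
    return min_rows_per_file
```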
Ray Train

💫 Enhancements:

  • Add Train v2 user-facing callback interface (#​49819)
  • Add TuneReportCallback for propagating intermediate Train results to Tune (#​49927)
Ray Serve

💫 Enhancements:

  • Cache metrics in replica and report on an interval (#​49971)
  • Cache expensive calls to inspect.signature (#​49975)
  • Remove extra pickle serialization for gRPCRequest (#​49943)
  • Shared LongPollClient for Routers (#​48807)
  • DeploymentHandle API is now stable (#​49840)

🔨 Fixes:

  • Fix batched requests hanging after request cancellation bug (#​50054)
RLlib

💫 Enhancements:

  • Add metrics to replay buffers. (#​49822)
  • Enhance node-failure tolerance (new API stack). (#​50007)
  • MetricsLogger cleanup throughput logic. (#​49981)
  • Split AddStates... connectors into 2 connector pieces (AddTimeDimToBatchAndZeroPad and AddStatesFromEpisodesToBatch) (#​49835)

🔨 Fixes:

  • Old API stack IMPALA/APPO: Re-introduce mixin-replay-buffer pass, even if replay-ratio=0 (fixes a memory leak). (#​49964)
  • Fix MetricsLogger race conditions. (#​49888)
  • APPO/IMPALA: Bug fix for > 1 Learner actor. (#​49849)

📖 Documentation:

  • New MetricsLogger API rst page. (#​49538)
  • Move "new API stack" info box right below page titles for better visibility. (#​49921)
  • Add example script for how to log custom metrics in training_step(). (#​49976)
  • Enhance/redo autoregressive action distribution example. (#​49967)
  • Make the "tiny CNN" example RLModule run with APPO (by implementing TargetNetAPI) (#​49825)
Ray Core and Ray Clusters
Ray Core

💫 Enhancements:

  • Only get single node info rather than all nodes when needed (#​49727)
  • Introduce with_tensor_transport API (#​49753)

Ray Clusters

🔨 Fixes:

  • Fix token expiration for ray autoscaler (#​48481)
Thanks

Thank you to everyone who contributed to this release! 🥳
@​wingkitlee0, @​saihaj, @​win5923, @​justinvyu, @​kevin85421, @​edoakes, @​cristianjd, @​rynewang, @​richardliaw, @​LeoLiao123, @​alexeykudinkin, @​simonsays1980, @​aslonnie, @​ruisearch42, @​pcmoritz, @​fscnick, @​bveeramani, @​mattip, @​till-m, @​tswast, @​ujjawal-khare, @​wadhah101, @​nikitavemuri, @​akshay-anyscale, @​srinathk10, @​zcin, @​dayshah, @​dentiny, @​LydiaXwQ, @​matthewdeng, @​JoshKarpel, @​MortalHappiness, @​sven1977, @​omatthew98

v2.41.0

Compare Source

Highlights
  • Major update of RLlib docs and example scripts for the new API stack.
Ray Libraries
Ray Data

🎉 New Features:

  • Expression support for filters (#​49016)
  • Support partition_cols in write_parquet (#​49411)
  • Feature: implement multi-directional sort over Ray Data datasets (#​49281)

💫 Enhancements:

  • Use dask 2022.10.2 (#​48898)
  • Clarify schema validation error (#​48882)
  • Raise ValueError when the data sort key is None (#​48969)
  • Provide clearer messages when the webdataset format is invalid (#​48643)
  • Upgrade Arrow version from 17 to 18 (#​48448)
  • Update hudi version to 0.2.0 (#​48875)
  • webdataset: expand JSON objects into individual samples (#​48673)
  • Support passing kwargs to map tasks. (#​49208)
  • Add ExecutionCallback interface (#​49205)
  • Add seed for read files (#​49129)
  • Make select_columns and rename_columns use Project operator (#​49393)

🔨 Fixes:

  • Fix partial function name parsing in map_groups (#​48907)
  • Always launch one task for read_sql (#​48923)
  • Reimplement the pandas memory accounting fix (#​48970)
  • webdataset: flatten return args (#​48674)
  • Handle numpy > 2.0.0 behaviour in _create_possibly_ragged_ndarray (#​48064)
  • Fix DataContext sealing for multiple datasets. (#​49096)
  • Fix to_tf for List types (#​49139)
  • Fix type mismatch error while mapping nullable column (#​49405)
  • Datasink: support passing write results to on_write_completes (#​49251)
  • Fix groupby hang when value contains np.nan (#​49420)
  • Fix bug where file_extensions doesn't work with compound extensions (#​49244)
  • Fix map operator fusion when concurrency is set (#​49573)
Ray Train

🎉 New Features:

  • Output JSON structured log files for system and application logs (#​49414)
  • Add support for AMD ROCR_VISIBLE_DEVICES (#​49346)

🏗 Architecture refactoring:

  • LightGBM: Rewrite get_network_params implementation (#​49019)
Ray Tune

🎉 New Features:

  • Update optuna_search to allow users to configure optuna storage (#​48547)

Ray Serve

💫 Enhancements:

  • Improved request_id generation to reduce proxy CPU overhead (#​49537)
  • Tune GC threshold by default in proxy (#​49720)
  • Use pickle.dumps for faster serialization from proxy to replica (#​49539)

🔨 Fixes:

  • Handle nested ‘=’ in serve run arguments (#​49719)
  • Fix bug when ray.init() is called multiple times with different runtime_envs (#​49074)

🗑️ Deprecations:

  • Adds a warning that the default behavior for sync methods will change in a future release. They will be run in a threadpool by default. You can opt into this behavior early by setting RAY_SERVE_RUN_SYNC_IN_THREADPOOL=1. (#​48897)
RLlib

🎉 New Features:

  • Add support for external Envs to new API stack: New example script and custom tcp-capable EnvRunner. (#​49033)

💫 Enhancements:

  • Offline RL:
  • APPO/IMPALA acceleration (new API stack):
    • Add support for AggregatorActors per Learner. (#​49284)
    • Auto-sleep time AND thread-safety for MetricsLogger. (#​48868)
    • Activate APPO cont. actions release- and CI tests (HalfCheetah-v1 and Pendulum-v1 new in tuned_examples). (#​49068)
    • Add "burn-in" period setting to the training of stateful RLModules. (#​49680)
  • Callbacks API: Add support for individual lambda-style callbacks. (#​49511)
  • Other enhancements: #​49687, #​49714, #​49693, #​49497, #​49800, #​49098

🏗 Architecture refactoring:

  • RLModule: Introduce Default[algo]RLModule classes (#​49366, #​49368)
  • Remove RLlib dependencies from setup.py; add ormsgpack (#​49489)

Ray Core and Ray Clusters
Ray Core

💫 Enhancements:

  • Add task_name, task_function_name and actor_name in Structured Logging (#​48703)
  • Support redis/valkey authentication with username (#​48225)
  • Add v6e TPU Head Resource Autoscaling Support (#​48201)
  • compiled graphs: Support all driver and actor read combinations (#​48963)
  • compiled graphs: Add ascii based CG visualization (#​48315)
  • compiled graphs: Add ray[cg] pip install option (#​49220)
  • Allow uv cache at installation (#​49176)
  • Support != Filter in GCS for Task State API (#​48983)
  • compiled graphs: Add CPU-based NCCL communicator for development (#​48440)
  • Support gcs and raylet log rotation (#​48952)
  • compiled graphs: Support nsight.nvtx profiling (#​49392)

🔨 Fixes:

  • autoscaler: Health check logs are not visible in the autoscaler container's stdout (#​48905)
  • Only publish WORKER_OBJECT_EVICTION when the object is out of scope or manually freed (#​47990)
  • autoscaler: Autoscaler doesn't scale up correctly when the KubeRay RayCluster is not in the goal state (#​48909)
  • autoscaler: Fix incorrectly terminating nodes misclassified as idle in autoscaler v1 (#​48519)
  • compiled graphs: Fix the missing dependencies when num_returns is used (#​49118)
  • autoscaler: Fuse scaling requests together to avoid overloading the Kubernetes API server (#​49150)
  • Fix bug to support S3 pre-signed url for .whl file (#​48560)
  • Fix data race on gRPC client context (#​49475)
  • Make sure draining node is not selected for scheduling (#​49517)
Ray Clusters

💫 Enhancements:

  • Azure: Enable accelerated networking as a flag in azure vms (#​47988)

📖 Documentation:

  • Kuberay: Logging: Add Fluent Bit DaemonSet and Grafana Loki to "Persist KubeRay Operator Logs" (#​48725)
  • Kuberay: Logging: Specify the Helm chart version in "Persist KubeRay Operator Logs" (#​48937)

Dashboard

💫 Enhancements:

  • Add instance variable to many default dashboard graphs (#​49174)
  • Display duration in milliseconds if under 1 second. (#​49126)
  • Add RAY_PROMETHEUS_HEADERS env for carrying additional headers to Prometheus (#​49353)
  • Document the RAY_PROMETHEUS_HEADERS env for carrying additional headers to Prometheus (#​49700)

🏗 Architecture refactoring:

  • Move memray dependency from default to observability (#​47763)
  • Move StateHead's methods into free functions. (#​49388)
Thanks

@​raulchen, @​alanwguo, @​omatthew98, @​xingyu-long, @​tlinkin, @​yantzu, @​alexeykudinkin, @​andrewsykim, @​win5923, @​csy1204, @​dayshah, @​richardliaw, @​stephanie-wang, @​gueraf, @​rueian, @​davidxia, @​fscnick, @​wingkitlee0, @​KPostOffice, @​GeneDer, @​MengjinYan, @​simonsays1980, @​pcmoritz, @​petern48, @​kashiwachen, @​pfldy2850, @​zcin, @​scottjlee, @​Akhil-CM, @​Jay-ju, @​JoshKarpel, @​edoakes, @​ruisearch42, @​gorloffslava, @​jimmyxie-figma, @​bthananjeyan, @​sven1977, @​bnorick, @​jeffreyjeffreywang, @​ravi-dalal, @​matthewdeng, @​angelinalg, @​ivanthewebber, @​rkooo567, @​srinathk10, @​maresb, @​gvspraveen, @​akyang-anyscale, @​mimiliaogo, @​bveeramani, @​ryanaoleary, @​kevin85421, @​richardsliu, @​hartikainen, @​coltwood93, @​mattip, @​Superskyyy, @​justinvyu, @​hongpeng-guo, @​ArturNiederfahrenhorst, @​jecsand838, @​Bye-legumes, @​hcc429, @​WeichenXu123, @​martinbomio, @​HollowMan6, @​MortalHappiness, @​dentiny, @​zhe-thoughts, @​anyadontfly, @​smanolloff, @​richo-anyscale, @​khluu, @​xushiyan, @​rynewang, @​japneet-anyscale, @​jjyao, @​sumanthratna, @​saihaj, @​aslonnie

Many thanks to all those who contributed to this release!

v2.40.0

Compare Source

Ray Libraries
Ray Data

💫 Enhancements:

  • Improved performance of DelegatingBlockBuilder (#​48509)
  • Improved memory accounting of pandas blocks (#​46939)

🔨 Fixes:

  • Fixed bug where you can’t specify a schema with write_parquet (#​48630)
  • Fixed bug where to_pandas errors if your dataset contains Arrow and pandas blocks (#​48583)
  • Fixed bug where map_groups doesn’t work with pandas data (#​48287)
  • Fixed bug where write_parquet errors if your data contains nullable fields (#​48478)
  • Fixed bug where “Iteration Blocked Time” charts look incorrect (#​48618)
  • Fixed bug where unique fails with null values (#​48750)
  • Fixed bug where “Rows Outputted” is 0 in the Data dashboard (#​48745)
  • Fixed bug where methods like drop_columns cause spilling (#​48140)
  • Fixed bug where async map tasks hang (#​48861)

Ray Train

🔨 Fixes:

  • Fix StartTracebackWithWorkerRank serialization (#​48548)

📖 Documentation:

  • Add example for fine-tuning Llama3.1 with AWS Trainium (#​48768)
Ray Tune

🔨 Fixes:

  • Remove the clear_checkpoint function during Trial restoration error handling. (#​48532)
Ray Serve

🎉 New Features:

  • Initial version of local_testing_mode (#​48477)

💫 Enhancements:

  • Handle multiple changed objects per LongPollHost.listen_for_change RPC (#​48803)
  • Add more nuanced checks for http proxy status errors (#​47896)
  • Improve replica access log messages to include HTTP status info and better resemble standard log format (#​48819)
  • Propagate replica constructor error to deployment status message and print num retries left (#​48531)

🔨 Fixes:

  • Pending requests that are cancelled before they were assigned to a replica now also return a serve.RequestCancelledError (#​48496)
RLlib

💫 Enhancements:

  • Release test enhancements. (#​45803, #​48681)
  • Make opencv-python-headless default over opencv-python (#​48776)
  • Reverse learner queue behavior of IMPALA/APPO (consume oldest batches first, instead of newest, BUT drop oldest batches if queue full). (#​48702)

📖 Documentation:

  • Upgrade examples script overview page (new API stack). (#​48526)
  • Enable RLlib + Serve example in CI and translate to new API stack. (#​48687)

Ray Core and Ray Clusters
Ray Core

💫 Enhancements:

  • [CompiledGraphs] Refine schedule visualization (#​48594)

🔨 Fixes:

  • [CompiledGraphs] Don't persist input_nodes in _CollectiveOperation to avoid wrong understanding about DAGs (#​48463)
  • [Core] Fix Ascend NPU discovery to support 8+ cards per node (#​48543)
  • [Core] Make Placement Group Wildcard and Indexed Resource Assignments Consistent (#​48088)
  • [Core] Stop the GRPC server before Shut down the Object Store (#​48572)
Ray Clusters

🔨 Fixes:

  • [KubeRay]: Fix ConnectionError on Autosc

Configuration

📅 Schedule: Branch creation - "" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.
