diff --git a/docs/features/intelligent-context-condensing.mdx b/docs/features/intelligent-context-condensing.mdx
index a06998e4..8e9924b3 100644
--- a/docs/features/intelligent-context-condensing.mdx
+++ b/docs/features/intelligent-context-condensing.mdx
@@ -120,7 +120,7 @@ When Roo Code encounters context window limit errors, it now automatically recov
#### How error recovery works
-1. **Error Detection**: Roo Code detects context window errors from multiple providers (OpenAI, Anthropic, Cerebras, and others)
+1. **Error Detection**: Roo Code detects context window errors from multiple providers (OpenAI, Anthropic, and others)
2. **Automatic Truncation**: The system automatically reduces the context by 25%
3. **Retry Mechanism**: After truncation, Roo Code retries your request (up to the built-in retry limit)
4. **Continuation**: Roo retries without manual intervention
diff --git a/docs/providers/cerebras.md b/docs/providers/cerebras.md
deleted file mode 100644
index 37eb9941..00000000
--- a/docs/providers/cerebras.md
+++ /dev/null
@@ -1,54 +0,0 @@
----
-sidebar_label: Cerebras
-description: Configure Cerebras AI's ultra-fast inference models in Roo Code. Access free and paid tiers with speeds up to 2600 tokens/second for coding and reasoning tasks.
-keywords:
- - cerebras
- - cerebras ai
- - roo code
- - api provider
- - fast inference
- - qwen coder
- - llama models
- - free tier
- - high speed ai
----
-
-# Using Cerebras With Roo Code
-
-Cerebras AI specializes in extremely fast inference speeds (up to 2600 tokens/second) with competitive pricing, including a free tier. Their models are optimized for coding, general intelligence, and reasoning tasks.
-
-**Website:** [https://cloud.cerebras.ai/](https://cloud.cerebras.ai/)
-
----
-
-## Getting an API Key
-
-1. **Sign Up/Sign In:** Go to [Cerebras Cloud](https://cloud.cerebras.ai?utm_source=roocode). Create an account or sign in.
-2. **Navigate to API Keys:** Access the API keys section in your dashboard.
-3. **Create a Key:** Generate a new API key. Give it a descriptive name (e.g., "Roo Code").
-4. **Copy the Key:** **Important:** Copy the API key immediately. Store it securely.
-
----
-
-
-## Configuration in Roo Code
-
-1. **Open Roo Code Settings:** Click the gear icon () in the Roo Code panel.
-2. **Select Provider:** Choose "Cerebras" from the "API Provider" dropdown.
-3. **Enter API Key:** Paste your Cerebras API key into the "Cerebras API Key" field.
-4. **Select Model:** Choose your desired model from the "Model" dropdown.
-
----
-
-## Available Models
-
-Roo Code automatically fetches all available models from Cerebras AI's API.
-
-For the complete, up-to-date model list and pricing, see [Cerebras Cloud](https://cloud.cerebras.ai?utm_source=roocode).
-
----
-
-## Tips and Notes
-
-* **Performance:** Cerebras specializes in extremely fast inference speeds, making it ideal for real-time coding assistance.
-* **Pricing:** Check the [Cerebras Cloud](https://cloud.cerebras.ai?utm_source=roocode) dashboard for current pricing and free tier details.
\ No newline at end of file
diff --git a/docs/providers/chutes.md b/docs/providers/chutes.md
deleted file mode 100644
index 14e3381f..00000000
--- a/docs/providers/chutes.md
+++ /dev/null
@@ -1,42 +0,0 @@
----
-sidebar_label: Chutes AI
-description: Configure Chutes AI with Roo Code for free access to various large language models. Get started with open-source and proprietary AI models.
-keywords:
- - chutes ai
- - free llm
- - roo code
- - api provider
- - free ai models
- - language models
- - llm api
- - open source models
----
-
-# Using Chutes AI With Roo Code
-
-Chutes.ai offers free API access to several large language models (LLMs), allowing developers to integrate and experiment with these models without immediate financial commitment. They provide access to a curated set of open-source and proprietary language models, often with a focus on specific capabilities or regional language support.
-
-**Website:** [https://chutes.ai/](https://chutes.ai/)
-
----
-
-## Getting an API Key
-
-To use Chutes AI with Roo Code, obtain an API key from the [Chutes AI platform](https://chutes.ai/). After signing up or logging in, you should find an option to generate or retrieve your API key within your account dashboard or settings.
-
----
-
-## Available Models
-
-Roo Code automatically fetches all available models from Chutes AI's API.
-
-For the complete, up-to-date model list, see [Chutes AI's platform](https://chutes.ai/) or your account dashboard.
-
----
-
-## Configuration in Roo Code
-
-1. **Open Roo Code Settings:** Click the gear icon () in the Roo Code panel.
-2. **Select Provider:** Choose "Chutes AI" from the "API Provider" dropdown.
-3. **Enter API Key:** Paste your Chutes AI API key into the "Chutes AI API Key" field.
-4. **Select Model:** Choose your desired model from the "Model" dropdown.
diff --git a/docs/providers/deepinfra.md b/docs/providers/deepinfra.md
deleted file mode 100644
index d7c07a9c..00000000
--- a/docs/providers/deepinfra.md
+++ /dev/null
@@ -1,46 +0,0 @@
----
-sidebar_label: DeepInfra
-description: Configure DeepInfra's high-performance AI models in Roo Code. Access Qwen Coder, Llama, and other open-source models with prompt caching and vision capabilities.
-keywords:
- - deepinfra
- - deep infra
- - roo code
- - api provider
- - qwen coder
- - llama models
- - prompt caching
- - vision models
- - open source ai
----
-
-# Using DeepInfra With Roo Code
-
-DeepInfra provides cost-effective access to high-performance open-source models with features like prompt caching, vision support, and specialized coding models. Their infrastructure offers low latency and automatic load balancing across global edge locations.
-
-**Website:** [https://deepinfra.com/](https://deepinfra.com/)
-
----
-
-## Getting an API Key
-
-1. **Sign Up/Sign In:** Go to [DeepInfra](https://deepinfra.com/). Create an account or sign in.
-2. **Navigate to API Keys:** Access the API keys section in your dashboard.
-3. **Create a Key:** Generate a new API key. Give it a descriptive name (e.g., "Roo Code").
-4. **Copy the Key:** **Important:** Copy the API key immediately. Store it securely.
-
----
-
-## Available Models
-
-Roo Code automatically fetches all available models from DeepInfra's API.
-
-For the complete, up-to-date model catalog, see [DeepInfra's models page](https://deepinfra.com/models).
-
----
-
-## Configuration in Roo Code
-
-1. **Open Roo Code Settings:** Click the gear icon () in the Roo Code panel.
-2. **Select Provider:** Choose "DeepInfra" from the "API Provider" dropdown.
-3. **Enter API Key:** Paste your DeepInfra API key into the "DeepInfra API Key" field.
-4. **Select Model:** Choose your desired model from the "Model" dropdown.
\ No newline at end of file
diff --git a/docs/providers/doubao.md b/docs/providers/doubao.md
deleted file mode 100644
index fdb2e8a0..00000000
--- a/docs/providers/doubao.md
+++ /dev/null
@@ -1,47 +0,0 @@
----
-sidebar_label: Doubao
-description: Configure ByteDance's Doubao AI models in Roo Code. Access competitive language models with full integration and internationalized support.
-keywords:
- - doubao
- - bytedance
- - bytedance ai
- - roo code
- - api provider
- - doubao models
- - chinese ai
- - language models
----
-
-# Using Doubao With Roo Code
-
-Doubao is ByteDance's Chinese AI service, offering competitive language models for various development tasks. The provider includes full API integration with embedding support and internationalized prompts.
-
-**Website:** [https://www.volcengine.com/](https://www.volcengine.com/)
-
----
-
-## Getting an API Key
-
-1. **Sign Up/Sign In:** Visit the [Volcano Engine Console](https://console.volcengine.com/). Create an account or sign in.
-2. **Navigate to Model Service:** Access the AI model service section in the console.
-3. **Create API Key:** Generate a new API key for the Doubao service.
-4. **Copy the Key:** **Important:** Copy the API key immediately and store it securely. You may not be able to view it again.
-
----
-
-## Available Models
-
-Roo Code supports all Doubao models available through ByteDance's Volcano Engine API.
-
-For the complete, up-to-date model list, see [Volcano Engine's AI model service](https://www.volcengine.com/).
-
----
-
-## Configuration in Roo Code
-
-1. **Open Roo Code Settings:** Click the gear icon () in the Roo Code panel.
-2. **Select Provider:** Choose "Doubao" from the "API Provider" dropdown.
-3. **Enter API Key:** Paste your Doubao API key into the "Doubao API Key" field.
-4. **Select Model:** Choose your desired model from the "Model" dropdown.
-
-**Note:** Doubao uses the base URL `https://ark.cn-beijing.volces.com/api/v3` and servers are located in Beijing, China.
\ No newline at end of file
diff --git a/docs/providers/featherless.md b/docs/providers/featherless.md
deleted file mode 100644
index d3d00bd7..00000000
--- a/docs/providers/featherless.md
+++ /dev/null
@@ -1,47 +0,0 @@
----
-sidebar_label: Featherless AI
-description: Configure Featherless AI's open-source models in Roo Code. Access free DeepSeek, Qwen, and other high-performance models through an OpenAI-compatible API.
-keywords:
- - featherless
- - featherless ai
- - roo code
- - api provider
- - deepseek
- - qwen
- - free models
- - open source ai
- - reasoning models
- - kimi k2
----
-
-# Using Featherless AI With Roo Code
-
-Featherless AI provides access to high-performance open-source models including DeepSeek, Qwen, and other large language models. All models are currently free to use, making it an excellent choice for budget-conscious developers.
-
-**Website:** [https://featherless.ai](https://featherless.ai)
-
----
-
-## Getting an API Key
-
-1. **Sign Up/Sign In:** Go to [Featherless AI](https://featherless.ai). Create an account or sign in.
-2. **Navigate to API Keys:** Access the [API keys page](https://featherless.ai/account/api-keys) in your account.
-3. **Create a Key:** Generate a new API key. Give it a descriptive name (e.g., "Roo Code").
-4. **Copy the Key:** **Important:** Copy the API key immediately. It will only be shown once. Store it securely.
-
----
-
-## Available Models
-
-Roo Code automatically fetches all available models from Featherless AI's API.
-
-For the complete, up-to-date model list, see [Featherless AI](https://featherless.ai).
-
----
-
-## Configuration in Roo Code
-
-1. **Open Roo Code Settings:** Click the gear icon () in the Roo Code panel.
-2. **Select Provider:** Choose "Featherless AI" from the "API Provider" dropdown.
-3. **Enter API Key:** Paste your Featherless API key into the "Featherless API Key" field.
-4. **Select Model:** Choose your desired model from the "Model" dropdown.
\ No newline at end of file
diff --git a/docs/providers/groq.md b/docs/providers/groq.md
deleted file mode 100644
index d0ad142f..00000000
--- a/docs/providers/groq.md
+++ /dev/null
@@ -1,43 +0,0 @@
----
-sidebar_label: Groq
-description: Configure Groq's high-speed LPU inference in Roo Code. Access Llama, Mixtral, and other models with significantly faster response times.
-keywords:
- - groq
- - groq cloud
- - roo code
- - api provider
- - lpu
- - fast inference
- - llama models
- - mixtral
- - high speed ai
----
-
-# Using Groq With Roo Code
-
-Groq specializes in providing very high-speed inference for large language models, utilizing their custom-built Language Processing Units (LPUs). This can result in significantly faster response times for supported models.
-
-**Website:** [https://groq.com/](https://groq.com/)
-
----
-
-## Getting an API Key
-
-To use Groq with Roo Code, you'll need an API key from the [GroqCloud Console](https://console.groq.com/). After signing up or logging in, navigate to the API Keys section of your dashboard to create and copy your key.
-
----
-
-## Available Models
-
-Roo Code automatically fetches all available models from the Groq API.
-
-For the complete, up-to-date model list and capabilities, see [Groq's models documentation](https://console.groq.com/docs/models).
-
----
-
-## Configuration in Roo Code
-
-1. **Open Roo Code Settings:** Click the gear icon () in the Roo Code panel.
-2. **Select Provider:** Choose "Groq" from the "API Provider" dropdown.
-3. **Enter API Key:** Paste your Groq API key into the "Groq API Key" field.
-4. **Select Model:** Choose your desired model from the "Model" dropdown.
diff --git a/docs/providers/huggingface.md b/docs/providers/huggingface.md
deleted file mode 100644
index b89b5ee8..00000000
--- a/docs/providers/huggingface.md
+++ /dev/null
@@ -1,101 +0,0 @@
----
-sidebar_label: Hugging Face
-description: Connect Roo Code to Hugging Face's inference router for access to open-source LLMs. Choose from multiple inference providers and models like Llama, Mistral, and more.
-keywords:
- - hugging face
- - huggingface
- - roo code
- - api provider
- - open source models
- - llama
- - mistral
- - inference router
- - ai models
- - inference providers
----
-
-# Using Hugging Face With Roo Code
-
-Roo Code integrates with the Hugging Face router to provide access to a curated collection of open-source models optimized for code assistance. The integration allows you to choose from multiple inference providers and automatically selects the best available option.
-
-**Website:** [https://huggingface.co/](https://huggingface.co/)
-
----
-
-## Getting an API Key
-
-1. **Sign Up/Sign In:** Go to [Hugging Face](https://huggingface.co/) and create an account or sign in.
-2. **Navigate to Settings:** Click on your profile picture and select "Settings".
-3. **Access Tokens:** Go to the "Access Tokens" section in your settings.
-4. **Create Token:** Click "New token" and give it a descriptive name (e.g., "Roo Code").
-5. **Set Permissions:** Select "Read" permissions (this is sufficient for Roo Code).
-6. **Copy Token:** **Important:** Copy the token immediately. Store it securely.
-
----
-
-## Available Models
-
-Roo Code automatically fetches all available models from the curated 'roocode' collection on Hugging Face.
-
-For the complete, up-to-date model collection, see [Hugging Face's roocode collection](https://huggingface.co/collections/roocode).
-
----
-
-## Configuration in Roo Code
-
-1. **Open Roo Code Settings:** Click the gear icon () in the Roo Code panel.
-2. **Select Provider:** Choose "Hugging Face" from the "API Provider" dropdown.
-3. **Enter API Key:** Paste your Hugging Face API token into the "Hugging Face API Key" field.
-4. **Select Model:** Choose your desired model from the "Model" dropdown. The dropdown shows the model count and is searchable.
-5. **Choose Inference Provider (Optional):** Select a specific inference provider from the dropdown, or leave it on "Auto" (default) to automatically select the best available provider.
-
----
-
-## Inference Provider Selection
-
-Hugging Face's router connects to multiple inference providers. You can either:
-
-- **Auto Mode (Default):** Automatically selects the best available provider based on model availability and performance
-- **Manual Selection:** Choose a specific provider from the dropdown
-
-The dropdown displays the status of each provider:
-- `live` - Provider is operational and available
-- `staging` - Provider is in testing phase
-- `error` - Provider is currently experiencing issues
-
-Provider names are formatted for better readability in the UI (e.g., "sambanova" appears as "SambaNova").
-
-When you select a specific provider, the model capabilities (max tokens, pricing) will update to reflect that provider's specific configuration. Pricing information is only displayed when a specific provider is selected, not in Auto mode.
-
----
-
-## Model Information Display
-
-For each selected model, Roo Code displays:
-
-- **Max Output:** The maximum number of tokens the model can generate (varies by provider)
-- **Pricing:** Cost per million input and output tokens (displayed only when a specific provider is selected)
-- **Image Support:** Currently, all models are shown as text-only. This is a Roo Code implementation limitation, not a restriction of the Hugging Face API.
-
----
-
-## Available Providers
-
-The list of available providers is dynamic and retrieved from the Hugging Face API. Common providers include:
-
-- **Together AI** - High-performance inference platform
-- **Fireworks AI** - Fast and scalable model serving
-- **DeepInfra** - Cost-effective GPU infrastructure
-- **Hyperbolic** - Optimized inference service
-- **Cerebras** - Hardware-accelerated inference
-
-*Note: The providers shown above are examples of commonly available options. The actual list may vary.*
-
----
-
-## Tips and Notes
-
-- **Provider Failover:** When using Auto mode, if the selected provider fails, Hugging Face's infrastructure will automatically try alternative providers
-- **Rate Limits:** Different providers may have different rate limits and availability
-- **Pricing Variability:** Costs can vary significantly between providers for the same model
-- **Model Updates:** The roocode collection is regularly updated with new and improved models
\ No newline at end of file
diff --git a/docs/providers/index.json b/docs/providers/index.json
index 5ac59f63..15eef9f1 100644
--- a/docs/providers/index.json
+++ b/docs/providers/index.json
@@ -18,48 +18,18 @@
"extension": true,
"cloud": false
},
- {
- "id": "providers/cerebras",
- "title": "Cerebras",
- "extension": true,
- "cloud": false
- },
- {
- "id": "providers/deepinfra",
- "title": "DeepInfra",
- "extension": true,
- "cloud": false
- },
{
"id": "providers/deepseek",
"title": "DeepSeek",
"extension": true,
"cloud": false
},
- {
- "id": "providers/doubao",
- "title": "Doubao",
- "extension": true,
- "cloud": false
- },
- {
- "id": "providers/featherless",
- "title": "Featherless AI",
- "extension": true,
- "cloud": false
- },
{
"id": "providers/fireworks",
"title": "Fireworks AI",
"extension": true,
"cloud": false
},
- {
- "id": "providers/chutes",
- "title": "Chutes AI",
- "extension": true,
- "cloud": false
- },
{
"id": "providers/gemini",
"title": "Google Gemini",
@@ -72,24 +42,6 @@
"extension": true,
"cloud": false
},
- {
- "id": "providers/groq",
- "title": "Groq",
- "extension": true,
- "cloud": false
- },
- {
- "id": "providers/huggingface",
- "title": "Hugging Face",
- "extension": true,
- "cloud": false
- },
- {
- "id": "providers/io-intelligence",
- "title": "IO Intelligence",
- "extension": true,
- "cloud": false
- },
{
"id": "providers/lmstudio",
"title": "LM Studio",
@@ -156,12 +108,6 @@
"extension": true,
"cloud": false
},
- {
- "id": "providers/unbound",
- "title": "Unbound",
- "extension": true,
- "cloud": false
- },
{
"id": "providers/vercel-ai-gateway",
"title": "Vercel AI Gateway",
diff --git a/docs/providers/io-intelligence.md b/docs/providers/io-intelligence.md
deleted file mode 100644
index 00972ea3..00000000
--- a/docs/providers/io-intelligence.md
+++ /dev/null
@@ -1,42 +0,0 @@
----
-description: This page explains how to configure and use the IO Intelligence provider with Roo Code.
-keywords:
- - io intelligence
- - provider
- - ai models
- - llama
- - deepseek
- - qwen
- - mistral
-sidebar_label: IO Intelligence
----
-
-# IO Intelligence Provider
-
-The IO Intelligence provider gives you access to a wide range of AI models, including those from Llama, DeepSeek, Qwen, and Mistral, through a unified API.
-
-## Configuration
-
-To use the IO Intelligence provider, you will need to add it to your `~/.roo/config.json` file.
-
-1. **Get your API key**: You can get an API key from the [IO Intelligence website](https://io.net/).
-2. **Add the provider to your config**: Add the following to your `config.json` file:
-
-```json
-{
- "providers": [
- {
- "id": "io-intelligence",
- "apiKey": "YOUR_IO_INTELLIGENCE_API_KEY"
- }
- ]
-}
-```
-
-## Available Models
-
-The IO Intelligence provider supports multiple AI models including Llama, DeepSeek, Qwen, and Mistral.
-
-For the current model list and specifications, see [IO Intelligence's documentation](https://io.net/).
-
-Models can be specified in your API configuration profiles in [`~/.roo/config.json`](#configuration).
\ No newline at end of file
diff --git a/docs/providers/unbound.md b/docs/providers/unbound.md
deleted file mode 100644
index da05cf9d..00000000
--- a/docs/providers/unbound.md
+++ /dev/null
@@ -1,52 +0,0 @@
----
-description: Configure Unbound in Roo Code for secure access to multiple LLMs through a single API. Enterprise-grade security and compliance features.
-keywords:
- - Unbound
- - Roo Code
- - LLM gateway
- - enterprise AI
- - secure AI
- - API provider
- - Anthropic
- - OpenAI
- - compliance
-sidebar_label: Unbound
----
-
-# Using Unbound With Roo Code
-
-Roo Code supports accessing models through [Unbound](https://getunbound.ai/), a platform that focuses on providing secure and reliable access to a variety of large language models (LLMs). Unbound acts as a gateway, allowing you to use models from providers like Anthropic and OpenAI without needing to manage multiple API keys and configurations directly. They emphasize security and compliance features for enterprise use.
-
-**Website:** [https://getunbound.ai/](https://getunbound.ai/)
-
----
-
-## Creating an Account
-
-1. **Sign Up/Sign In:** Go to the [Unbound gateway](https://gateway.getunbound.ai). Create an account or sign in.
-2. **Create an Application:** Go to the [Applications](https://gateway.getunbound.ai/ai-gateway-applications) page and hit the "Create Application" button.
-3. **Copy the API Key:** Copy the API key to your clipboard.
-
----
-
-## Available Models
-
-Roo Code automatically fetches all models configured in your Unbound application.
-
-Configure your allowed models in the [Unbound Applications dashboard](https://gateway.getunbound.ai/ai-gateway-applications), then Roo Code will display them in the model dropdown.
-
----
-
-## Configuration in Roo Code
-
-1. **Open Roo Code Settings:** Click the gear icon () in the Roo Code panel.
-2. **Select Provider:** Choose "Unbound" from the "API Provider" dropdown.
-3. **Enter API Key:** Paste your Unbound API key into the "Unbound API Key" field.
-4. **Select Model:** Choose your desired model from the "Model" dropdown.
-
----
-
-## Tips and Notes
-
-* **Security Focus:** Unbound emphasizes security features for enterprise use. If your organization has strict security requirements for AI usage, Unbound might be a good option.
-* **Model List Refresh:** Roo Code includes a refresh button specifically for the Unbound provider in the settings. This allows you to easily update the list of available models from your Unbound application and get immediate feedback on your API key's validity.
diff --git a/docs/update-notes/v3.16.0.md b/docs/update-notes/v3.16.0.md
index eecbd8c4..717e976d 100644
--- a/docs/update-notes/v3.16.0.md
+++ b/docs/update-notes/v3.16.0.md
@@ -17,7 +17,7 @@ keywords:
*Release notes for Roo Code v3.16.0, published on 2025-05-06.*
-This release introduces vertical tab navigation for settings, new API providers ([Groq](/providers/groq) and [Chutes AI](/providers/chutes)), clickable code references, and numerous UI/UX enhancements, alongside various bug fixes and miscellaneous improvements.
+This release introduces vertical tab navigation for settings, new API providers (Groq and Chutes AI), clickable code references, and numerous UI/UX enhancements, alongside various bug fixes and miscellaneous improvements.
---
@@ -73,12 +73,12 @@ General UI improvements for a more consistent, visually appealing, and intuitive
---
## New Provider: Groq Integration (thanks @shariqriazz!)
-You can now connect to [Groq](/providers/groq) and utilize their high-speed language models directly within the extension.
+You can now connect to Groq and utilize their high-speed language models directly within the extension.
---
## New Provider: Chutes AI Integration (thanks @shariqriazz!)
-Support for [Chutes AI](/providers/chutes) has also been added, allowing you to leverage their specialized AI capabilities.
+Support for Chutes AI has also been added, allowing you to leverage their specialized AI capabilities.
---
diff --git a/docs/update-notes/v3.16.md b/docs/update-notes/v3.16.md
index 844a5679..fb3e5fea 100644
--- a/docs/update-notes/v3.16.md
+++ b/docs/update-notes/v3.16.md
@@ -68,12 +68,12 @@ General UI improvements for a more consistent, visually appealing, and intuitive
---
## New Provider: Groq Integration (thanks shariqriazz!)
-You can now connect to [Groq](/providers/groq) and utilize their high-speed language models directly within the extension.
+You can now connect to Groq and utilize their high-speed language models directly within the extension.
---
## New Provider: Chutes AI Integration (thanks shariqriazz!)
-Support for [Chutes AI](/providers/chutes) has also been added, allowing you to leverage their specialized AI capabilities.
+Support for Chutes AI has also been added, allowing you to leverage their specialized AI capabilities.
---
diff --git a/docs/update-notes/v3.18.0.mdx b/docs/update-notes/v3.18.0.mdx
index c940f244..7e760c6b 100644
--- a/docs/update-notes/v3.18.0.mdx
+++ b/docs/update-notes/v3.18.0.mdx
@@ -85,8 +85,8 @@ Access the latest `gemini-2.5-flash-preview-05-20` model, including its thinking
* **LM Studio and Ollama Token Tracking**: Token usage is now tracked for [LM Studio](/providers/lmstudio) and [Ollama](/providers/ollama) providers. (thanks xyOz-dev!)
* **LM Studio Reasoning Support**: Added support for parsing "think" tags in [LM Studio](/providers/lmstudio) responses for enhanced transparency into the AI's process. (thanks avtc!)
-* **Qwen3 Model Series for Chutes**: Added new Qwen3 models to the [Chutes provider](/providers/chutes) (e.g., `Qwen/Qwen3-235B-A22B`). (thanks zeozeozeo!)
-* **Unbound Provider Model Refresh**: Added a refresh button for [Unbound](/providers/unbound) models to easily update the list of available models and get immediate feedback on API key validity. (thanks pugazhendhi-m!)
+* **Qwen3 Model Series for Chutes**: Added new Qwen3 models to the Chutes provider (e.g., `Qwen/Qwen3-235B-A22B`). (thanks zeozeozeo!)
+* **Unbound Provider Model Refresh**: Added a refresh button for Unbound models to easily update the list of available models and get immediate feedback on API key validity. (thanks pugazhendhi-m!)
---
diff --git a/docs/update-notes/v3.18.mdx b/docs/update-notes/v3.18.mdx
index 32db8699..6fc08347 100644
--- a/docs/update-notes/v3.18.mdx
+++ b/docs/update-notes/v3.18.mdx
@@ -138,8 +138,8 @@ Access the latest `gemini-2.5-flash-preview-05-20` model, including its thinking
* **LiteLLM Refresh**: Added ability to refresh [`LiteLLM`](/providers/litellm) models list for up-to-date model availability
* **LM Studio and Ollama Token Tracking**: Token usage is now tracked for [LM Studio](/providers/lmstudio) and [Ollama](/providers/ollama) providers. (thanks xyOz-dev!)
* **LM Studio Reasoning Support**: Added support for parsing "think" tags in [LM Studio](/providers/lmstudio) responses for enhanced transparency into the AI's process. (thanks avtc!)
-* **Qwen3 Model Series for Chutes**: Added new Qwen3 models to the [Chutes provider](/providers/chutes) (e.g., `Qwen/Qwen3-235B-A22B`). (thanks zeozeozeo!)
-* **Unbound Provider Model Refresh**: Added a refresh button for [Unbound](/providers/unbound) models to easily update the list of available models and get immediate feedback on API key validity. (thanks pugazhendhi-m!)
+* **Qwen3 Model Series for Chutes**: Added new Qwen3 models to the Chutes provider (e.g., `Qwen/Qwen3-235B-A22B`). (thanks zeozeozeo!)
+* **Unbound Provider Model Refresh**: Added a refresh button for Unbound models to easily update the list of available models and get immediate feedback on API key validity. (thanks pugazhendhi-m!)
* **Requesty Thinking Controls**: Add thinking controls for [Requesty provider](/providers/requesty) (thanks dtrugman!)
* **LiteLLM Metadata**: Improve model metadata for [LiteLLM provider](/providers/litellm)
diff --git a/docs/update-notes/v3.19.0.mdx b/docs/update-notes/v3.19.0.mdx
index 8368287b..063b3aca 100644
--- a/docs/update-notes/v3.19.0.mdx
+++ b/docs/update-notes/v3.19.0.mdx
@@ -51,7 +51,7 @@ Navigate between different modes and prompts more intuitively.
## Provider Updates
-* **DeepSeek R1 0528**: Add DeepSeek R1 0528 model support to [Chutes provider](/providers/chutes) (thanks zeozeozeo!)
+* **DeepSeek R1 0528**: Add DeepSeek R1 0528 model support to the Chutes provider (thanks zeozeozeo!)
* **AWS Regions**: Updated AWS regions to include Spain and Hyderabad
---
diff --git a/docs/update-notes/v3.24.0.mdx b/docs/update-notes/v3.24.0.mdx
index 83927990..731c8177 100644
--- a/docs/update-notes/v3.24.0.mdx
+++ b/docs/update-notes/v3.24.0.mdx
@@ -28,7 +28,7 @@ We've added support for Hugging Face as a new provider, bringing access to thous
- **Flexible Integration**: Use models hosted on Hugging Face's infrastructure
- **Easy Configuration**: Simple setup process to get started with your preferred models and providers
-This opens up Roo Code to the entire Hugging Face ecosystem of open source AI models. See our [Hugging Face provider documentation](/providers/huggingface) for setup instructions.
+This opens up Roo Code to the entire Hugging Face ecosystem of open source AI models.
## Diagnostic Controls
diff --git a/docs/update-notes/v3.25.11.mdx b/docs/update-notes/v3.25.11.mdx
index c7d14250..4e1fef21 100644
--- a/docs/update-notes/v3.25.11.mdx
+++ b/docs/update-notes/v3.25.11.mdx
@@ -22,7 +22,7 @@ We've enhanced our GPT-5 integration, enabling you to leverage more advanced cap
We've added IO Intelligence as a new provider, giving you access to a wide range of AI models like Llama, DeepSeek, Qwen, and Mistral through a unified API ([#6875](https://github.com/RooCodeInc/Roo-Code/pull/6875)).
-> **📚 Documentation**: See the [IO Intelligence Provider documentation](/providers/io-intelligence) for more information.
+> **Note**: The IO Intelligence provider has since been retired.
## Codex Mini Model Support
diff --git a/docs/update-notes/v3.25.4.mdx b/docs/update-notes/v3.25.4.mdx
index 86f95e0d..114d9a6d 100644
--- a/docs/update-notes/v3.25.4.mdx
+++ b/docs/update-notes/v3.25.4.mdx
@@ -22,7 +22,7 @@ We've added support for Doubao, ByteDance's AI model provider (thanks AntiMoron!
Doubao expands your AI model options, giving you access to ByteDance's competitive language models alongside existing providers.
-> **📚 Documentation**: See [Doubao Provider Guide](/providers/doubao) for setup instructions and available models.
+> **Note**: The Doubao provider has since been retired.
## SambaNova Provider Integration
diff --git a/docs/update-notes/v3.25.5.mdx b/docs/update-notes/v3.25.5.mdx
index 46a44c00..0b06f76f 100644
--- a/docs/update-notes/v3.25.5.mdx
+++ b/docs/update-notes/v3.25.5.mdx
@@ -23,7 +23,7 @@ We've added support for Cerebras as a new AI provider (thanks kevint-cerebras!)
The Cerebras provider offers competitive performance with flexible pricing tiers, making it an excellent choice for both experimentation and production use.
-> **📚 Documentation**: See [Cerebras Provider Guide](/providers/cerebras) for setup instructions and available models.
+> **Note**: The Cerebras provider has since been retired.
## Auto-approved Cost Limits
diff --git a/docs/update-notes/v3.26.7.mdx b/docs/update-notes/v3.26.7.mdx
index ae84690e..fc8df039 100644
--- a/docs/update-notes/v3.26.7.mdx
+++ b/docs/update-notes/v3.26.7.mdx
@@ -48,7 +48,7 @@ DeepInfra is now available as a model provider (thanks Thachnh!) ([#7677](https:
DeepInfra is an excellent choice for developers looking for variety and value in their AI model selection.
-> **📚 Documentation**: See [DeepInfra Provider Setup](/providers/deepinfra) to get started.
+> **Note**: The DeepInfra provider has since been retired.
## QOL Improvements
diff --git a/docs/update-notes/v3.27.0.mdx b/docs/update-notes/v3.27.0.mdx
index 8f2a7ae7..e6dab1d4 100644
--- a/docs/update-notes/v3.27.0.mdx
+++ b/docs/update-notes/v3.27.0.mdx
@@ -57,4 +57,4 @@ Edit or delete any chat message and quickly recover from mistakes using automati
## Provider Updates
* Chutes: Adds Kimi K2-0905 model with a 256k context window and pricing metadata (thanks pwilkin!) (#[7701](https://github.com/RooCodeInc/Roo-Code/pull/7701))
- > 📚 Documentation: See [Chutes](/providers/chutes)
\ No newline at end of file
+ > Note: The Chutes provider has since been retired.
\ No newline at end of file
diff --git a/docs/update-notes/v3.28.2.mdx b/docs/update-notes/v3.28.2.mdx
index 9166db1a..b41149ce 100644
--- a/docs/update-notes/v3.28.2.mdx
+++ b/docs/update-notes/v3.28.2.mdx
@@ -29,4 +29,4 @@ This release improves the auto-approve UI, adds Qwen3 Next 80B A3B models via th
## Provider Updates
* Add Qwen3 Next 80B A3B models to the chutes provider ([#7948](https://github.com/RooCodeInc/Roo-Code/pull/7948))
- > See [Chutes provider](/providers/chutes) for setup and usage.
\ No newline at end of file
+ > Note: The Chutes provider has since been retired.
\ No newline at end of file
diff --git a/docs/update-notes/v3.28.7.mdx b/docs/update-notes/v3.28.7.mdx
index 7a379486..246d5c9b 100644
--- a/docs/update-notes/v3.28.7.mdx
+++ b/docs/update-notes/v3.28.7.mdx
@@ -26,4 +26,4 @@ One-click Cloud account switching, cleaner conversations with collapsible thinki
## Provider Updates
* Chutes: add `zai-org/GLM-4.5-turbo` model with a 128K context window and competitive pricing (approx. $1/M input, $3/M output), enabling longer prompts with fast inference (thanks mugnimaestra!) ([#8157](https://github.com/RooCodeInc/Roo-Code/pull/8157))
- > See provider setup at [Chutes](/providers/chutes).
\ No newline at end of file
+ > Note: The Chutes provider has since been retired.
\ No newline at end of file
diff --git a/docusaurus.config.ts b/docusaurus.config.ts
index 8e73b539..87a5a7bc 100644
--- a/docusaurus.config.ts
+++ b/docusaurus.config.ts
@@ -318,6 +318,44 @@ const config: Config = {
from: ['/providers/claude-code'],
},
+ // Redirect removed provider pages (removed in v3.47+)
+ {
+ to: '/',
+ from: ['/providers/cerebras'],
+ },
+ {
+ to: '/',
+ from: ['/providers/chutes'],
+ },
+ {
+ to: '/',
+ from: ['/providers/deepinfra'],
+ },
+ {
+ to: '/',
+ from: ['/providers/doubao'],
+ },
+ {
+ to: '/',
+ from: ['/providers/featherless'],
+ },
+ {
+ to: '/',
+ from: ['/providers/groq'],
+ },
+ {
+ to: '/',
+ from: ['/providers/huggingface'],
+ },
+ {
+ to: '/',
+ from: ['/providers/io-intelligence'],
+ },
+ {
+ to: '/',
+ from: ['/providers/unbound'],
+ },
+
// Redirect removed Fast Edits feature page
{
to: '/',