diff --git a/docs/how-tos/vs-code/datacoves-copilot/v2.md b/docs/how-tos/vs-code/datacoves-copilot/v2.md
index 532d501..7709f6a 100644
--- a/docs/how-tos/vs-code/datacoves-copilot/v2.md
+++ b/docs/how-tos/vs-code/datacoves-copilot/v2.md
@@ -111,9 +111,12 @@ Website: https://www.anthropic.com/
 
 Datacoves Copilot supports the following Anthropic Claude models:
 
+- claude-sonnet-4-5 (Recommended - default)
+- claude-sonnet-4-20250514
+- claude-opus-4-5-20251101
 - claude-opus-4-1-20250805
 - claude-opus-4-20250514
-- claude-sonnet-4-20250514 (Recommended)
+- claude-haiku-4-5-20251001
 - claude-3-7-sonnet-20250219
 - claude-3-7-sonnet-20250219:thinking (Extended Thinking variant)
 - claude-3-5-sonnet-20241022
@@ -155,26 +158,23 @@ Website: https://openai.com/
 
 ### Supported Models (`apiModelId`)
 
-#### GPT-5 Family (Latest)
+#### GPT-5.x Family (Latest)
 
-The GPT-5 models are OpenAI's most advanced, offering superior coding capabilities and agentic task performance:
+The GPT-5.x models are OpenAI's most advanced, offering superior coding capabilities and agentic task performance:
 
-- gpt-5-2025-08-07 (default) - Best model for coding and agentic tasks
-- gpt-5-mini-2025-08-07 - Faster, cost-efficient for well-defined tasks
-- gpt-5-nano-2025-08-07 - Fastest, most cost-efficient option
+- gpt-5.1-codex-max (default) - Most intelligent coding model optimized for long-horizon, agentic coding tasks (400K context)
+- gpt-5.2 - Flagship model for coding and agentic tasks across industries (400K context)
+- gpt-5.2-chat-latest - Optimized for conversational AI and chat use cases
+- gpt-5.1 - Best model for coding and agentic tasks across domains (400K context)
+- gpt-5.1-codex - Optimized for agentic coding in Codex (400K context)
+- gpt-5.1-codex-mini - Cost-efficient version optimized for agentic coding (400K context)
 
-#### GPT-5-Codex
+#### GPT-5 Family
 
-OpenAI's specialized coding model with advanced capabilities:
-
-Key Features:
-
-400K Token Context Window - Process entire codebases and lengthy documentation
-Image Support - Analyze screenshots, diagrams, and visual documentation
-Prompt Caching - Reduced costs for repeated context through automatic caching
-Adaptive Reasoning - Dynamically adjusts reasoning depth based on task complexity
-
-Ideal for: Large-scale code analysis, multimodal tasks requiring visual understanding, complex refactoring projects, and extensive codebase operations.
+- gpt-5 - Best model for coding and agentic tasks across domains (400K context)
+- gpt-5-mini - Faster, cost-efficient for well-defined tasks
+- gpt-5-nano - Fastest, most cost-efficient option
+- gpt-5-codex - Specialized coding model
 
 #### GPT-4.1 Family
 
@@ -307,48 +307,25 @@ Website: https://ai.google.dev/
 
 Datacoves Copilot supports the following Gemini models:
 
-#### Model Aliases (Recommended)
-
-For stability and automatic updates, use these aliases that point to the latest stable versions:
-
-- gemini-flash-latest - Always uses the newest stable Flash model
-- gemini-pro-latest - Always uses the newest stable Pro model
+#### Gemini 3 (Latest)
 
-#### Standard Models
+- gemini-3-pro-preview (Recommended - default) - 1M token context window with reasoning support
+- gemini-3-flash-preview - Fast, cost-efficient with 1M token context window
 
-- gemini-2.5-flash-preview-05-20
-- gemini-2.5-flash-preview-04-17
-- gemini-2.5-flash-lite-preview-06-17
-- gemini-2.5-pro-exp-03-25
-- gemini-2.0-flash-001
-- gemini-2.0-flash-lite-preview-02-05
-- gemini-2.0-pro-exp-02-05
-- gemini-2.0-flash-exp
-- gemini-1.5-flash-002
-- gemini-1.5-flash-exp-0827
-- gemini-1.5-flash-8b-exp-0827
-- gemini-1.5-pro-002
-- gemini-1.5-pro-exp-0827
-- gemini-exp-1206
+#### Gemini 2.5 Pro Models
 
-#### Preview Models
+- gemini-2.5-pro - 1M token context with thinking support
+- gemini-2.5-pro-preview-06-05
+- gemini-2.5-pro-preview-05-06
+- gemini-2.5-pro-preview-03-25
 
-Preview models include Google's latest experimental features but may change without notice:
+#### Gemini 2.5 Flash Models
 
-- Models with -preview- in the name (e.g., gemini-2.5-flash-preview-05-20)
-- Models with -exp- suffix (e.g., gemini-2.0-flash-exp)
-- Models prefixed with gemini-exp- (e.g., gemini-exp-1206)
-
-Preview models are ideal for testing cutting-edge capabilities but may have breaking changes. Use stable aliases for production work.
-
-#### Thinking Models
-
-These models require reasoning budget to be enabled in Datacoves Copilot:
-
-- gemini-2.5-flash-preview-05-20:thinking
-- gemini-2.5-flash-preview-04-17:thinking
-- gemini-2.0-flash-thinking-exp-01-21
-- gemini-2.0-flash-thinking-exp-1219
+- gemini-flash-latest - Always uses the newest stable Flash model
+- gemini-2.5-flash - 1M token context with thinking support
+- gemini-2.5-flash-preview-09-2025
+- gemini-flash-lite-latest - Lightweight option
+- gemini-2.5-flash-lite-preview-09-2025
 
 Refer to the [Gemini documentation](https://ai.google.dev/gemini-api/docs/models) for more details on each model.
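+
+Whichever provider you use, the model is selected by supplying one of the identifiers listed on this page as the `apiModelId` value. The TypeScript sketch below is illustrative only: the `ProviderSettings` shape and the `apiProvider` field are assumptions rather than the extension's actual API, so refer to the Datacoves Copilot settings UI for the real configuration surface.
+
+```typescript
+// Illustrative sketch only. Only the apiModelId values come from this page;
+// the interface and the apiProvider field are assumptions for demonstration.
+interface ProviderSettings {
+  apiProvider: "anthropic" | "openai" | "gemini"; // assumed provider switch
+  apiModelId: string; // one of the model identifiers documented above
+}
+
+// Example: pin the recommended Gemini default.
+const settings: ProviderSettings = {
+  apiProvider: "gemini",
+  apiModelId: "gemini-3-pro-preview",
+};
+```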