diff --git a/guides/ai/getting_started_with_chat.mdx b/guides/ai/getting_started_with_chat.mdx
index efb2a5430..65bf05027 100644
--- a/guides/ai/getting_started_with_chat.mdx
+++ b/guides/ai/getting_started_with_chat.mdx
@@ -13,6 +13,7 @@ The chat completions feature is experimental and must be enabled before use. See
## Prerequisites
Before starting, ensure you have:
+
- A Meilisearch instance running (v1.15.1 or later)
- An API key from an LLM provider (OpenAI, Azure OpenAI, Mistral, Gemini, or access to a vLLM server)
- At least one index with searchable content
@@ -20,6 +21,16 @@ Before starting, ensure you have:
## Quick start
+### Understanding workspaces
+
+Think of workspaces as different "assistants" you can create for various purposes. Each workspace can have its own personality (system prompt) and capabilities. **Workspaces are created automatically** when you configure them, so there is no separate creation step.
+
+For example:
+
+- `customer-support` - A helpful assistant for customer queries
+- `product-search` - An expert at finding the perfect product
+- `docs-helper` - A technical assistant for documentation
+
### Enable the chat completions feature
First, enable the chat completions experimental feature:
@@ -134,18 +145,6 @@ curl \
}'
```
-## Understanding workspaces
-
-Workspaces allow you to create isolated chat configurations for different use cases:
-
-- **Customer support**: Configure with support-focused prompts
-- **Product search**: Optimize for e-commerce queries
-- **Documentation**: Tune for technical Q&A
-
-Each workspace maintains its own:
-- LLM provider configuration
-- System prompt
-
## Building a chat interface with OpenAI SDK
Since Meilisearch's chat endpoint is OpenAI-compatible, you can use the official OpenAI SDK:
@@ -267,6 +266,70 @@ except Exception as error:
+## Managing conversations
+
+Since Meilisearch keeps your data private and doesn't store conversations, you'll need to manage conversation history in your application. Here's a simple approach:
+
+
+```javascript JavaScript
+// Store conversation history in your app
+const conversation = [];
+
+// Add user message
+conversation.push({ role: 'user', content: 'What is Meilisearch?' });
+
+// Get response and add to history
+const response = await client.chat.completions.create({
+ model: 'gpt-3.5-turbo',
+ messages: conversation,
+ stream: true,
+});
+
+// Add assistant response to history
+let assistantMessage = '';
+for await (const chunk of response) {
+ assistantMessage += chunk.choices[0]?.delta?.content || '';
+}
+conversation.push({ role: 'assistant', content: assistantMessage });
+
+// Use the full conversation for follow-up questions
+conversation.push({ role: 'user', content: 'Can it handle typos?' });
+// ... continue the conversation
+```
+
+```python Python
+# Store conversation history in your app
+conversation = []
+
+# Add user message
+conversation.append({"role": "user", "content": "What is Meilisearch?"})
+
+# Get response and add to history
+response = client.chat.completions.create(
+ model="gpt-3.5-turbo",
+ messages=conversation,
+ stream=True,
+)
+
+# Add assistant response to history
+assistant_message = ""
+for chunk in response:
+ if chunk.choices[0].delta.content is not None:
+ assistant_message += chunk.choices[0].delta.content
+conversation.append({"role": "assistant", "content": assistant_message})
+
+# Use the full conversation for follow-up questions
+conversation.append({"role": "user", "content": "Can it handle typos?"})
+# ... continue the conversation
+```
+
+Remember: Each request is independent, so always send the full conversation history if you want the AI to remember previous exchanges.
+
## Next steps
- Explore [advanced chat API features](/reference/api/chats)
diff --git a/reference/api/chats.mdx b/reference/api/chats.mdx
index 802dbe7be..00e46b4a6 100644
--- a/reference/api/chats.mdx
+++ b/reference/api/chats.mdx
@@ -8,6 +8,10 @@ import { RouteHighlighter } from '/snippets/route_highlighter.mdx';
The `/chats` route enables AI-powered conversational search by integrating Large Language Models (LLMs) with your Meilisearch data. This feature allows users to ask questions in natural language and receive contextual answers based on your indexed content.
+
+To optimize how your content is presented to the LLM, configure the [conversation settings for each index](/reference/api/settings#conversation). This allows you to customize descriptions, document templates, and search parameters for better AI responses.
+
This is an experimental feature. Use the Meilisearch Cloud UI or the experimental features endpoint to activate it:
@@ -19,6 +23,7 @@ curl \
"chatCompletions": true
}'
```
+
## Chat completions workspace object
@@ -39,6 +44,10 @@ curl \
Configure the LLM provider and settings for a chat workspace.
+
+If the specified workspace doesn't exist, this endpoint automatically creates it. There is no need to create workspaces explicitly beforehand.
+
```json
{
"source": "openAi",
@@ -82,7 +91,6 @@ Configure the LLM provider and settings for a chat workspace.
| **`searchQParam`** | String | A prompt to explain what the `q` parameter of the search function does and how to use it |
| **`searchIndexUidParam`** | String | A prompt to explain what the `indexUid` parameter of the search function does and how to use it |
-
### Request body
```json
@@ -391,6 +399,19 @@ curl \
}
```
+## Privacy and data storage
+
+🔒 **Your conversations are private**: Meilisearch does not store any conversation history or context between requests. Each chat completion request is stateless and independent. Any conversation continuity must be managed by your application.
+
+This design ensures:
+
+- Complete privacy of user conversations
+- No data retention of questions or answers
+- Full control over conversation history in your application
+- Compliance with data protection regulations
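Since every request is stateless, keeping that control usually means a per-session store in the application layer. A minimal in-memory sketch (names like `add_message` are illustrative, not part of any SDK; a production app would persist sessions in a database):

```python
from collections import defaultdict

# In-memory conversation store keyed by session id (illustrative only)
conversations: dict[str, list[dict]] = defaultdict(list)

def add_message(session_id: str, role: str, content: str) -> None:
    conversations[session_id].append({"role": role, "content": content})

def get_history(session_id: str) -> list[dict]:
    # The full list is what gets resent with the next chat completions request
    return conversations[session_id]

add_message("session-1", "user", "What is Meilisearch?")
add_message("session-1", "assistant", "Meilisearch is an open-source search engine.")
print(len(get_history("session-1")))  # 2
```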
+
## Authentication
The chat feature integrates with Meilisearch's authentication system:
@@ -549,11 +570,13 @@ This tool reports real-time progress of internal search operations. When declare
**Purpose**: Provides transparency about search operations and reduces perceived latency by showing users what's happening behind the scenes.
**Arguments**:
+
- `call_id`: Unique identifier to track the search operation
- `function_name`: Name of the internal function being executed (e.g., "_meiliSearchInIndex")
- `function_parameters`: JSON-encoded string containing search parameters like `q` (query) and `index_uid`
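Because `function_parameters` arrives as a JSON-encoded string inside already-JSON tool-call arguments, clients need to decode it twice. A minimal Python sketch with invented values:

```python
import json

# Illustrative _meiliSearchProgress arguments; values are made up for the sketch
arguments = json.dumps({
    "call_id": "call-1",
    "function_name": "_meiliSearchInIndex",
    "function_parameters": json.dumps({"q": "chat completions", "index_uid": "docs"}),
})

progress = json.loads(arguments)  # first decode: the tool-call arguments
params = json.loads(progress["function_parameters"])  # second decode: nested JSON string
print(params["q"], params["index_uid"])  # chat completions docs
```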
**Example Response**:
+
```json
{
"function": {
@@ -570,12 +593,14 @@ Since the `/chats/{workspace}/chat/completions` endpoint is stateless, this tool
**Purpose**: Maintains conversation context for better response quality in subsequent requests by preserving tool calls and results.
**Arguments**:
+
- `role`: Message author role ("user" or "assistant")
- `content`: Message content (for tool results)
- `tool_calls`: Array of tool calls made by the assistant
- `tool_call_id`: ID of the tool call this message responds to
**Example Response**:
+
```json
{
"function": {
@@ -592,10 +617,12 @@ This tool provides the source documents that were used by the LLM to generate re
**Purpose**: Shows users which documents were used to generate responses, improving trust and enabling source verification.
**Arguments**:
+
- `call_id`: Matches the `call_id` from `_meiliSearchProgress` to associate queries with results
- `documents`: JSON object containing the source documents with only displayed attributes
**Example Response**:
+
```json
{
"function": {
diff --git a/snippets/samples/code_samples_typo_tolerance_guide_5.mdx b/snippets/samples/code_samples_typo_tolerance_guide_5.mdx
index 8f5efdef6..6733ece79 100644
--- a/snippets/samples/code_samples_typo_tolerance_guide_5.mdx
+++ b/snippets/samples/code_samples_typo_tolerance_guide_5.mdx
@@ -15,6 +15,12 @@ client.index('movies').updateTypoTolerance({
})
```
+```python Python
+client.index('movies').update_typo_tolerance({
+ 'disableOnNumbers': True
+})
+```
+
```php PHP
$client->index('movies')->updateTypoTolerance([
'disableOnNumbers' => true