|| Amazon Titan Embeddings Text v2 | Embeddings | text | embeddings||
|| Anthropic Claude 2.0, Anthropic Claude 2.1| Chat Completions | text, document | text ||
-|| Anthropic Claude 3 Sonnet, Anthropic Claude 3.5 Sonnet, Anthropic Claude 3.5 Sonnet v2, Anthropic Claude 3 Haiku, Anthropic Claude 3 Opus, Anthropic Claude 3.5 Haiku, Anthropic Claude 3.7 Sonnet | Chat Completions | text, image, document | text | Function calling |
+|| Anthropic Claude 3 Sonnet, Anthropic Claude 3.5 Sonnet, Anthropic Claude 3.5 Sonnet v2, Anthropic Claude 3 Haiku, Anthropic Claude 3 Opus, Anthropic Claude 3.5 Haiku, Anthropic Claude 3.7 Sonnet, Anthropic Claude 4 Sonnet| Chat Completions | text, image, document | text | Function calling |
content/en/docs/marketplace/genai/concepts/function-calling.md (10 additions & 10 deletions)
@@ -14,39 +14,39 @@ The LLM (e.g. OpenAI ChatGPT, Anthropic Claude) does not call the function. The
## High-level flow {#high-level}

-If you use the `Chat Completions (without history)` or `Chat Completions (with history)` actions for text generation with function calling, the LLM connector (OpenAI Connector or Amazon Bedrock Connector) will handle the whole process for you in just one step:
+If you use the `Chat Completions (without history)` or `Chat Completions (with history)` actions for text generation with function calling, the LLM connector will handle the whole process for you in just one step:

1. Invoke the chat completions API with a user prompt and a collection of available functions (microflows) with their expected input parameters.

The model will decide which function (microflow) should be called within the LLM connector, if any. The response of the operation will be based on the information you provide and the response of any function (microflow) that was called.
-This automates the following process happening inside the LLM connector (OpenAI Connector, Amazon Bedrock Connector):
+This automates the following process happening inside the LLM connector:

1. Invoke the chat completions API with a user prompt and a collection of available functions (microflows) with their expected input parameters.

-2. The model decides which function (microflow) should be called, if any, based on the user prompt and the available functions. If a function should be called, the content of the assistant's response will be a stringified JSON object containing the input parameter of the function as described in the request. Note that the LLM can possibly hallucinate parameters, so they should be validated inside the function microflow before being used.
-3. The LLM connector parses the string into JSON and executes the function microflow with its input parameter.
+2. The model decides which function (microflow) should be called, if any, based on the user prompt and the available functions. If a function should be called, the content of the assistant's response will be a stringified JSON object containing the input parameters of the function as described in the request. Note that the LLM can possibly hallucinate parameters, so they should be validated inside the function microflow before being used.
+3. The LLM connector parses the string into JSON and executes the function microflow with its input parameters.

4. The existing list of messages is appended with a new tool message containing the function response. Then, the chat completions API is invoked again and the model can answer the initial prompt with the new information provided by the function.
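To make this loop concrete, the following is a minimal sketch in Python of the same four steps. It is not part of the Mendix connectors: `call_chat_completions_api` and `execute_microflow` are hypothetical stand-ins for the provider API call and the microflow execution that the LLM connector performs internally.

```python
import json

def run_function_calling(call_chat_completions_api, execute_microflow, user_prompt, functions):
    """Hypothetical sketch of the loop an LLM connector runs internally.

    `functions` maps a tool name to its parameter schema and the microflow to execute.
    """
    messages = [{"role": "user", "content": user_prompt}]
    tools = [{"name": name, "parameters": spec["schema"]} for name, spec in functions.items()]

    while True:
        # 1. Invoke the chat completions API with the prompt and the available tools.
        assistant_message = call_chat_completions_api(messages=messages, tools=tools)
        messages.append(assistant_message)

        tool_calls = assistant_message.get("tool_calls") or []
        if not tool_calls:
            # No function requested: this is the final assistant response.
            return assistant_message["content"]

        for tool_call in tool_calls:
            # 2./3. Parse the stringified JSON arguments and execute the function microflow.
            arguments = json.loads(tool_call["arguments"])
            result = execute_microflow(functions[tool_call["name"]]["microflow"], arguments)

            # 4. Append a tool message with the function result, then call the API again.
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call["id"],
                "content": result,
            })
```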
For more general information on this topic, see [OpenAI: Function Calling](https://platform.openai.com/docs/guides/function-calling) or [Anthropic Claude: Tool Use](https://docs.anthropic.com/en/docs/tool-use).
## Function Calling with the GenAI Commons Module and the LLM Connectors {#llm-connector}
-Both the [OpenAI Connector](/appstore/modules/genai/openai/) and [Amazon Bedrock Connector](/appstore/modules/aws/amazon-bedrock/) support function calling by leveraging the [GenAI Commons module](/appstore/modules/genai/commons/). In both connectors, function calling is supported for all chat completions operations. All entity, attribute and activity names in this section refer to the GenAI Commons module.
+All platform-supported connectors ([Mendix Cloud GenAI](/appstore/modules/genai/mx-cloud-genai/MxGenAI-connector/), [OpenAI](/appstore/modules/genai/openai/), and [Amazon Bedrock Connector](/appstore/modules/aws/amazon-bedrock/)) support function calling by leveraging the [GenAI Commons module](/appstore/modules/genai/commons/). Function calling is supported for all chat completions operations. All entity, attribute, and activity names in this section refer to the GenAI Commons module.
-Functions in Mendix are essentially microflows that can be registered within the request to the LLM. The LLM connector takes care of handling the tool call response as well as executing the function microflow(s) until the LLM returns the final assistant's response. Currently, function microflows can either have no input parameters or one input parameter of type string and must return a string.
+Functions in Mendix are essentially microflows that can be registered within the request to the LLM. The LLM connector takes care of handling the tool call response as well as executing the function microflows until the LLM returns the final assistant's response. Function microflows can have no input parameters, or one or more primitive input parameters such as Boolean, Datetime, Decimal, Enumeration, Integer, or String. Additionally, they may accept the [Request](/appstore/modules/genai/genai-for-mx/commons/#request) or [Tool](/appstore/modules/genai/genai-for-mx/commons/#tool) objects as inputs. The microflow can only return a String value.
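For illustration only, a microflow such as `GetTicketById` with a single Integer parameter `TicketId` could be described to the model roughly as follows. The OpenAI-style JSON shape below is an assumption; the connectors build the actual payload from the registered `Function` objects.

```python
# Hypothetical illustration of how a function microflow could be presented to the model.
# The names and the schema layout are assumptions, not the connector's exact payload.
ticket_tool = {
    "type": "function",
    "function": {
        "name": "GetTicketById",          # the ToolName registered for the microflow
        "description": "Returns the details of a support ticket for the current user.",
        "parameters": {                   # one property per primitive microflow input parameter
            "type": "object",
            "properties": {
                "TicketId": {"type": "integer", "description": "Identifier of the ticket."}
            },
            "required": ["TicketId"],
        },
    },
}
```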
To enable function calling, a `ToolCollection` object must be added to the request, which is associated with one or more `Function` objects.
A helper operation is available in [GenAI Commons](/appstore/modules/genai/commons/) to construct the `ToolCollection` with a list of `Functions`:
-* Tools: Add Function to Request can be used to initialize a new `ToolCollection` and add a new `Function` to it in order to enable function calling.
+* `Tools: Add Function to Request` can be used to initialize a new `ToolCollection` and add a new `Function` to it in order to enable function calling.
Depending on the user prompt and the available functions, the model can suggest one or multiple tool calls to the same or different functions, or there might be multiple API calls followed by new tool calls until the model returns the final assistant's response.
A way to steer the function calling process is the `ToolChoice` parameter. This optional attribute on the Request entity controls which (if any) function is called by the model.
A helper operation is available in GenAI Commons to define the Tool Choice:
-* Tools: Set Tool Choice can be used to set the `ToolChoice` parameter and the `ToolCollection_ToolChoice` association accordingly.
+* `Tools: Set Tool Choice` can be used to set the `ToolChoice` parameter and the `ToolCollection_ToolChoice` association accordingly.
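For reference, OpenAI-style APIs typically accept tool-choice values along the following lines; `Tools: Set Tool Choice` fills the corresponding `ToolChoice` attribute for you. The exact wire format shown is an assumption and differs per provider.

```python
# Typical tool-choice values in OpenAI-style chat completions APIs (assumed, provider-specific):
tool_choice_auto = "auto"        # the model decides whether to call a tool
tool_choice_none = "none"        # the model must answer without calling any tool
tool_choice_forced = {           # the model must call this specific function
    "type": "function",
    "function": {"name": "GetTicketById"},
}
```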
{{% alert color="warning" %}}
Function calling is a very powerful capability, but should be used with caution. Please note that function microflows run in the context of the current user without enforcing entity access. You can use `$currentUser` in XPath queries to ensure you retrieve and return only information that the end-user is allowed to view; otherwise, confidential information may become visible to the current end-user in the assistant's response.
@@ -62,15 +62,15 @@ For models used through Azure OpenAI, feature availability is currently differen
-Multiple models available on Amazon Bedrock support function calling. In Bedrock documentation, function calling is often addressed as *Tool Use*, which describes the same concept.
+Multiple models available on Amazon Bedrock support function calling. In the Bedrock documentation, function calling is often addressed as *Tool Use*, which describes the same concept.
A detailed overview showing which models support function calling (tool use) can be found [here](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html#conversation-inference-supported-models-features).
## Use cases {#use-cases}
Function calling can be used for a variety of use cases including the following:
* Creating assistants that can answer questions about data from your Mendix database or a knowledge base
-* for example, getTicketById (string identifier) or findSimilarTickets (string description)
+* for example, getTicketById (integer identifier) or findSimilarTickets (string description)
* Creating assistants that can get information from external APIs
* for example, getCurrentWeather (string location)
* Extracting structured data from natural language
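Connecting these use cases to the earlier warnings about hallucinated parameters and entity access, the sketch below shows the kind of checks a function such as getTicketById would perform. In a Mendix app this logic lives inside the function microflow (validation plus an XPath retrieve constrained by `$currentUser`); `find_ticket_for_user` is a hypothetical helper.

```python
def get_ticket_by_id(identifier: str, current_user_id: str, find_ticket_for_user) -> str:
    """Hypothetical equivalent of a getTicketById function microflow.

    `find_ticket_for_user` stands in for an XPath retrieve constrained by $currentUser.
    """
    # Validate the model-provided argument before using it: the LLM may hallucinate values.
    try:
        ticket_id = int(identifier)
    except (TypeError, ValueError):
        return "Invalid ticket identifier; expected an integer value."

    # Only return data that the current end-user is allowed to see.
    ticket = find_ticket_for_user(ticket_id, current_user_id)
    if ticket is None:
        return f"No ticket with ID {ticket_id} found for the current user."

    # The function must return a string; the model uses it to formulate the final answer.
    return f"Ticket {ticket_id}: {ticket['title']} (status: {ticket['status']})"
```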
content/en/docs/marketplace/genai/mendix-cloud-genai/Mx GenAI Connector.md (1 addition & 1 deletion)
@@ -141,7 +141,7 @@ Function calling enables LLMs to connect with external tools to gather informati
The model does not call the function but rather returns a tool call JSON structure that is used to build the input of the function (or functions) so that they can be executed as part of the chat completions operation. Functions in Mendix are essentially microflows that can be registered within the request to the LLM. The connector takes care of handling the tool call response and executing the function microflows until the API returns the assistant's final response.
-Function microflows take a single input parameter of type string and optionally a Request and/or Tool object or no input parameter at all and return a string.
+Function microflows can have no input parameters, or one or more primitive input parameters such as Boolean, Datetime, Decimal, Enumeration, Integer, or String. Additionally, they may accept the [Request](/appstore/modules/genai/genai-for-mx/commons/#request) or [Tool](/appstore/modules/genai/genai-for-mx/commons/#tool) objects as inputs. The function microflow must return a String value.
{{% alert color="warning" %}}
Function calling is a highly effective capability and should be used with caution. Function microflows run in the context of the current user, without enforcing entity access. You can use `$currentUser` in XPath queries to ensure that you retrieve and return only information that the end-user is allowed to view; otherwise, confidential information may become visible to the current end-user in the assistant's response.
* [OpenAI Connector](/appstore/modules/genai/reference-guide/external-connectors/openai/#general-configuration) (for Azure knowledge bases, available in an upcoming release)
To allow an agent to perform semantic searches, add the knowledge base to the agent definition and configure the retrieval parameters, such as the number of chunks to retrieve and the similarity threshold. Multiple knowledge bases can be added to the agent to pick from. Give each knowledge base a name and description (in human language) so that the model can decide which retrievals are necessary based on the input it gets.
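As a rough sketch of what these retrieval parameters control (the names below are illustrative, not the module's API), an agent-side retrieval step conceptually does the following:

```python
def retrieve_chunks(knowledge_bases, query_embedding, kb_name, max_chunks=5, min_similarity=0.7):
    """Illustrative only: return the top chunks above a similarity threshold.

    `knowledge_bases` maps a knowledge base name to a list of chunks with precomputed
    embeddings; the real retrieval is performed by the configured knowledge base connector.
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
        return dot / norm if norm else 0.0

    chunks = knowledge_bases[kb_name]  # the model picks kb_name based on its name and description
    scored = [(cosine(query_embedding, c["embedding"]), c["text"]) for c in chunks]
    scored = [item for item in scored if item[0] >= min_similarity]  # apply the similarity threshold
    scored.sort(reverse=True)
    return [text for _, text in scored[:max_chunks]]  # cap at the configured number of chunks
```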
content/en/docs/marketplace/genai/reference-guide/external-platforms/openai.md (1 addition & 1 deletion)
@@ -210,7 +210,7 @@ OpenAI does not call the function. The model returns a tool called JSON structur
This is all part of the implementation that is executed by the GenAI Commons chat completions operations mentioned before. As a developer, you have to make the system aware of your functions and what they do by registering the function(s) with the request. This is done using the GenAI Commons operation [Tools: Add Function to Request](/appstore/modules/genai/genai-for-mx/commons/#add-function-to-request) once per function before passing the request to the chat completions operation.
-Currently, the connector supports the calling of Function microflows that take a single input parameter of type string and optionally a Request and/or Tool object or no input parameter at all and return a string.
+Function microflows can have no input parameters, or one or more primitive input parameters such as Boolean, Datetime, Decimal, Enumeration, Integer, or String. Additionally, they may accept the [Request](/appstore/modules/genai/genai-for-mx/commons/#request) or [Tool](/appstore/modules/genai/genai-for-mx/commons/#tool) objects as inputs. The function microflow must return a String value.
{{% alert color="warning" %}}
Function calling is a very powerful capability and should be used with caution. Function microflows run in the context of the current user, without enforcing entity access. You can use `$currentUser` in XPath queries to ensure that you retrieve and return only information that the end-user is allowed to view; otherwise, confidential information may become visible to the current end-user in the assistant's response.
content/en/docs/marketplace/genai/reference-guide/external-platforms/pg-vector-knowledge-base/_index.md (1 addition & 1 deletion)
@@ -63,7 +63,7 @@ You must perform the following steps to integrate a Mendix app integrate a PgVec
1. Add the module role **PgVectorKnowledgeBase.Administrator** to your Administrator user role in the security settings of your app. Optionally, map **GenAICommons.User** to any user roles that need read access directly on retrieved entities.
2. Add the **DatabaseConfiguration_Overview** page (**USE_ME > Configuration**) to your navigation, or add the **Snippet_DatabaseConfigurations** to a page that is already part of your navigation.
-3. Set up your database configurations at runtime. For more information, see the [Configuring the Database Connection Details](/appstore/modules/genai/reference-guide/external-connectors/pgvector-setup/#configure-database-connection) section in *Setting up a Vector Database*.
+3. Set up your database configurations at runtime. For more information, see the [Configuring the Database Connection Details](/appstore/modules/genai/reference-guide/external-connectors/pgvector-setup/#configure-database-connection) section in *Setting up a Vector Database*. Selecting an embeddings model is optional and only required if you plan to use PgVector for the [Tools: Add Knowledge Base](/appstore/modules/genai/genai-for-mx/commons/#add-knowledge-base-to-request) action.
{{% alert color="info" %}}
It is possible to have multiple knowledge bases in the same database in parallel by providing different knowledge base names in combination with the same `DatabaseConfiguration`.
content/en/docs/marketplace/genai/reference-guide/genai-commons.md (13 additions & 3 deletions)
@@ -219,10 +219,18 @@ A tool call object may be generated by the model in certain scenarios, such as a
| Attribute | Description |
| --- | --- |
|`Name`| The name of the tool to call. This refers to the `Name` attribute of one of the [Tools](#tool) in the Request. |
-| `Arguments` | The arguments with which the tool is to be called, as generated by the model in JSON format. Note that the model does not always generate valid JSON and may hallucinate parameters that are not defined by your tool's schema. Mendix recommends validating the arguments in the code before calling the tool. |
|`ToolType`| The type of the tool. View AI provider documentation for supported types. |
|`ToolCallId`| This is a model-generated ID of the proposed tool call. It is used by the model to map an assistant message containing a tool call with the output of the tool call (tool message). |
+#### `Argument` {#argument}
+
+The arguments used to call the tool are generated by the model in JSON format. Note that the model does not always generate valid JSON and may hallucinate parameters that are not defined by your tool's schema. Mendix recommends validating the arguments in the code before calling the tool. One argument is generated for each primitive input parameter of the selected microflow.
+
+| Attribute | Description |
+| --- | --- |
+|`Key`| The name of the input parameter as given in the microflow. |
+|`Value`| The value that is passed to the input parameter. |
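As an illustration of how a list of `Argument` objects relates to the microflow that is called (a sketch, not the connector's implementation), each `Key` is matched to an input parameter name and its `Value` is converted to that parameter's type:

```python
def build_parameter_map(arguments, parameter_types):
    """Sketch: match Argument Key/Value pairs to typed microflow input parameters.

    `arguments` is a list of {"Key": ..., "Value": ...} dicts as generated by the model;
    `parameter_types` maps each microflow parameter name to a converter such as int or str.
    """
    parameters = {}
    for argument in arguments:
        key, value = argument["Key"], argument["Value"]
        if key not in parameter_types:
            raise ValueError(f"Unknown parameter '{key}' (possibly hallucinated by the model)")
        parameters[key] = parameter_types[key](value)  # may raise if the format is wrong
    return parameters

# Example: a microflow with an Integer 'TicketId' and a String 'Comment' input parameter.
params = build_parameter_map(
    [{"Key": "TicketId", "Value": "42"}, {"Key": "Comment", "Value": "escalate"}],
    {"TicketId": int, "Comment": str},
)
```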
#### `Reference` {#reference}
An optional reference for a response message.
@@ -496,7 +504,7 @@ To include files within a message, you must provide them in the form of a file c
##### Tools: Add Function to Request {#add-function-to-request}
-Adds a new Function to a [ToolCollection](#toolcollection) that is part of a Request. Use this microflow when you have microflows in your application that may be called to retrieve the required information as part of a GenAI interaction. If you want the model to be aware of these microflows, you can use this operation to add them as functions to the request. If supported by the LLM connector, the chat completion operation calls the right functions based on the LLM response and continues the process until the assistant's final response is returned.
+Adds a new Function to a [ToolCollection](#toolcollection) that is part of a Request. Use this action to expose microflows as tools to the LLM via [function calling](/appstore/modules/genai/function-calling/). If supported by the LLM connector, the chat completion operation calls the right functions based on the LLM response and continues the process until the assistant's final response is returned.
###### Input Parameters
@@ -505,10 +513,12 @@ Adds a new Function to a [ToolCollection](#toolcollection) that is part of a Req
|`Request`|[Request](#request)| mandatory | The request to add the function to. |
|`ToolName`| String | mandatory | The name of the tool to use/call. |
|`ToolDescription`| String | optional | An optional description of what the tool does, used by the model to choose when and how to call the tool. |
-|`FunctionMicroflow`| Microflow | mandatory | The microflow that is called within this function. A function microflow can only have a single string input parameter or no input parameter and returns a string. |
+|`FunctionMicroflow`| Microflow | mandatory | The microflow that is called within this function. |
{{% alert color="info" %}}
Since this microflow runs in the context of the user, you can make sure that it only shows data that is relevant to the current user.
+The microflow can have no input parameters, or one or more primitive input parameters such as Boolean, Datetime, Decimal, Enumeration, Integer, or String. Additionally, it may accept the [Request](#request) or [Tool](#tool) objects as inputs. The microflow can only return a String value.
+Note that calling the microflow may fail if the model passes parameters in the wrong format, for example, a decimal number for an integer parameter. Such errors are logged and returned to the model, which may either inform the user or retry the tool call. The model can also pass empty values, so proper validation is recommended.
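A sketch of the guard this note implies (hypothetical helper names; the connector handles this internally): conversion or validation errors are caught, logged, and returned to the model as the tool result, so that it can retry the call or inform the user.

```python
import logging

logger = logging.getLogger("function-calling")

def execute_tool_call(parameter_map_builder, microflow, arguments, parameter_types):
    """Sketch of executing a function microflow with model-generated arguments.

    `parameter_map_builder` and `microflow` are hypothetical callables standing in for
    the connector's argument mapping and the actual microflow execution.
    """
    try:
        parameters = parameter_map_builder(arguments, parameter_types)
        return microflow(**parameters)
    except (ValueError, TypeError) as error:
        # Wrong format or empty/hallucinated values: log the problem and feed the error
        # back to the model as the tool result so it can retry or inform the user.
        logger.warning("Tool call failed: %s", error)
        return f"Error while calling the function: {error}"
```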