|
# Workflow Executor Agent
|
## Table of Contents
|
1. [Overview](#overview)
2. [Deployment Options](#deployment-options)
3. [Validated Configurations](#validated-configurations)
4. [Roadmap](#roadmap)
|
## Overview
|
GenAI Workflow Executor Example showcases the capability to handle data/AI workflow operations via LangChain agents to execute custom-defined workflow-based tools. These workflow tools can be interfaced with any 3rd-party tool on the market (no-code/low-code/IDE), such as Alteryx, RapidMiner, Power BI, and Intel Data Insight Automation, which allow users to create complex data/AI workflow operations for different use-cases.
|
### Definitions
|
Before we begin, here are definitions of some terms for clarity:
|
- **Servable/Serving Workflow**: A workflow made ready to be executed through an API. It should be able to accept parameter injection for workflow scheduling and have a way to retrieve the final output data. It should also have a unique workflow ID for referencing.
- **SDK Class**: Performs requests to interface with a 3rd-party API to perform workflow operations on the servable workflow. Found in `tools/sdk.py`.
- **Workflow ID**: A unique ID for the servable workflow.
- **Workflow Instance**: An instance created from the servable workflow. It is represented as a `Workflow` class created using `DataInsightAutomationSDK.create_workflow()` under `tools/sdk.py`. It contains methods to `start`, `get_status`, and `get_results` from the workflow.
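As a minimal sketch of how these pieces fit together (assuming only the class and method names above; the constructor arguments and the example workflow ID are illustrative, not the exact implementation):

```python
from tools.sdk import DataInsightAutomationSDK

# Hypothetical instantiation; the real constructor arguments may differ.
sdk = DataInsightAutomationSDK()

# Create a workflow instance that wraps one servable workflow,
# referenced by its unique workflow ID (placeholder value here).
workflow = sdk.create_workflow(workflow_id=12345)

# The instance exposes the lifecycle methods described above:
#   workflow.start(params)   # start execution with injected parameters
#   workflow.get_status()    # check for completion or failure
#   workflow.get_results()   # retrieve the final output data
```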
|
### Workflow Executor Strategy
|
This example demonstrates a single ReAct-LangGraph agent with a `Workflow Executor` tool to ingest a user prompt, execute workflows, and return an agent-reasoned response based on the workflow output data.
|
First, the LLM extracts the relevant information from the user query based on the schema of the tool in `tools/tools.yaml`. Then the agent sends this `AgentState` to the `Workflow Executor` tool.
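For example, in the churn-prediction use-case shown later in this README, a user query like "I have a data with gender Female, tenure 55, MonthlyAvgCharges 103.7. Predict if this entry will churn. My workflow id is ${workflow_id}." (with `${workflow_id}` substituted by the actual servable workflow ID) is distilled by the LLM into a parameter dictionary along these lines:

```python
# Serving parameters the LLM extracts from the user query above;
# the field names must match the parameters defined in the workflow.
params = {"gender": "Female", "tenure": 55, "MonthlyAvgCharges": 103.7}
```

These parameters, together with the workflow ID, are what the `Workflow Executor` tool receives to start the workflow execution.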
|
The `Workflow Executor` tool requires an SDK class to call the servable workflow API. In the code, `DataInsightAutomationSDK` (found under `tools/sdk.py`) is the example class used to interface with several high-level APIs. There are 3 steps to this tool implementation:
|
1. Starts the workflow with the workflow parameters and workflow ID extracted from the user query.
2. Periodically checks the workflow status for completion or failure. This may be done through a database which stores the current status of the workflow.
3. Retrieves the output data from the workflow through a storage service.
|
The `AgentState` is sent back to the LLM for reasoning. Based on the output data, the LLM generates a response to answer the user's input prompt.
|
Below is an illustration of this flow:
|
60 | 38 |  |
61 | 39 |
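Putting the three steps together, a rough sketch of the tool's core logic might look like the following (illustrative only: the function name, polling interval, and status strings are assumptions, and the real tool is wired up through the schema in `tools/tools.yaml`):

```python
import time

from tools.sdk import DataInsightAutomationSDK


def execute_workflow(workflow_id: int, params: dict) -> str:
    """Hypothetical sketch of the Workflow Executor tool: start, poll, retrieve."""
    workflow = DataInsightAutomationSDK().create_workflow(workflow_id=workflow_id)

    # Step 1: start the workflow with the parameters extracted from the user query.
    workflow.start(params)

    # Step 2: periodically check the workflow status until completion or failure,
    # e.g. backed by a database that stores the current workflow status.
    status = workflow.get_status()
    while status not in ("finished", "failed"):  # status values are assumptions
        time.sleep(10)
        status = workflow.get_status()

    # Step 3: retrieve the output data (e.g. from a storage service)
    # so the LLM can reason over it and answer the user's prompt.
    return workflow.get_results() if status == "finished" else "Workflow execution failed."
```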
|
### Workflow Serving for Agent
|
The first step is to prepare a servable workflow using a platform with the capabilities to do so.
|
As an example, here we have a Churn Prediction use-case workflow as the serving workflow for the agent execution. It is created through the Intel Data Insight Automation platform. The image below shows a snapshot of the Churn Prediction workflow.

When the workflow is configured as desired, transform this into a servable workflow.

> [!NOTE]
> Remember to create a unique workflow ID along with the servable workflow.
|
## Deployment Options
|
The table below lists the currently available deployment options. Each option links to a guide that details the implementation of this example on the selected hardware.
|
| Category               | Deployment Option | Description                                                                       |
| ---------------------- | ----------------- | --------------------------------------------------------------------------------- |
| On-premise Deployments | Docker compose    | [WorkflowExecAgent deployment on Xeon](./docker_compose/intel/cpu/xeon/README.md) |
|                        | Kubernetes        | Work in progress                                                                   |
|
## Validated Configurations
|
| **Deploy Method** | **Hardware** |
| ----------------- | ------------ |
| Docker Compose    | Intel Xeon   |

## Roadmap

Phase II: Agent memory integration to enable the capability to store tool intermediate results, such as a workflow instance key.