This guide describes how to set up and run the LangGraph Ollama multi-agent application locally, and how to obtain an API key for LangSmith. You will need:
- Python 3.11 or higher
- pip (Python package installer)
- A Google Cloud Platform (GCP) account (for Gemini API access) is recommended
- A LangSmith account (for tracing and debugging)
- Fork and Clone the Repository (if applicable):

```bash
git clone <your_repository_url>
cd <your_application_directory>
```
- Create a `.env` file: Copy `env_copy` to a file named `.env` in the root directory of your project. This file will store your API keys and other sensitive information:

```
GOOGLE_API_KEY=<your_google_api_key>
LANGCHAIN_API_KEY=<your_langsmith_api_key>
LANGCHAIN_TRACING_V2="true"
LANGCHAIN_PROJECT="Your_LangGraph_Project_Name"
# ...plus any other API keys your setup requires
```
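To confirm the file is actually being picked up, you can load it from Python. This is a minimal sketch assuming the `python-dotenv` package (your project may load the file differently, e.g. via Docker Compose):

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads key=value pairs from .env into os.environ

# Fail fast if a required key is missing.
for key in ("GOOGLE_API_KEY", "LANGCHAIN_API_KEY"):
    if not os.getenv(key):
        raise RuntimeError(f"{key} is missing from your .env file")
```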
- Go to Google AI Studio:
- Follow the instructions to create a project and obtain an API key.
- Alternatively, you can obtain a Google Cloud API key from the Google Cloud Console:
- Enable the Gemini API for your project.
- Create API credentials.
- Add the key to your `.env` file as `GOOGLE_API_KEY`.
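To verify the key works before wiring it into the graph, you can make a one-off call. A minimal sketch assuming the `langchain-google-genai` package; the model name is just an example:

```python
from langchain_google_genai import ChatGoogleGenerativeAI  # pip install langchain-google-genai

# The client reads GOOGLE_API_KEY from the environment by default.
llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")  # example model name
print(llm.invoke("Say hello in one word.").content)
```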
- Sign up for LangSmith:
- Obtain your API key:
- After creating an account, navigate to your settings to find your API key.
- Set Environment Variables:
  - Add the API key to your `.env` file as `LANGCHAIN_API_KEY`.
  - Also add `LANGCHAIN_TRACING_V2="true"` to your `.env` file.
  - Set `LANGCHAIN_PROJECT` to your desired project name.
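To check that the LangSmith key is valid, you can ping the API with the `langsmith` client. A minimal sketch; listing projects is simply a cheap authenticated call:

```python
from langsmith import Client  # ships with the langsmith package

client = Client()  # reads LANGCHAIN_API_KEY from the environment

# Any authenticated call confirms the key works.
print(next(iter(client.list_projects()), None))
```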
- Navigate to your application directory and start the stack:

```bash
cd <your_application_directory>
docker-compose up --build
```
- Run the application using the `langgraph` CLI [optional]:

```bash
langgraph run --config langgraph.json
# or
langgraph dev
```
- Ensure that your `langgraph.json` file is correctly configured to point to your LangGraph definition (a sketch follows below).
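For reference, a minimal `langgraph.json` sketch. The module path and variable name (`./agent.py:graph`) are placeholders; point them at wherever your compiled graph is defined:

```json
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./agent.py:graph"
  },
  "env": ".env"
}
```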
- Using LangSmith and setting `LANGCHAIN_TRACING_V2="true"` in your `.env` file will enable tracing of your LangGraph runs from your local machine. This is very useful for debugging.
- You can review your LangGraph runs in the LangSmith UI.
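LangChain and LangGraph calls are traced automatically once the environment variables above are set. If you also want plain Python functions to show up in the trace, the `langsmith` package provides a decorator; a small sketch:

```python
from langsmith import traceable

# Runs are grouped under the project named by LANGCHAIN_PROJECT.
@traceable
def add(a: int, b: int) -> int:
    return a + b

add(2, 3)  # this call now appears as a run in the LangSmith UI
```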
- Make sure that your `.env` file is not committed to version control, as it contains sensitive information. Add `.env` to your `.gitignore` file.
- If you encounter any issues, refer to the official LangGraph and LangChain documentation.
🐳 Running with Docker Compose

You can also run the application using Docker Compose, which will spin up both the LangGraph agent and the Ollama server.
- Start the services:

```bash
docker-compose up --build
```

- To stop, press Ctrl+C and then run:

```bash
docker-compose down --remove-orphans
```
Keep an eye on your Docker images and containers: regularly remove stopped containers and dangling images (for example with `docker system prune`), otherwise they can fill your disk and grind your system to a halt.
- This will launch:
  - 🚀 agent_service: Your LangGraph agent on http://localhost:5000
  - 🧠 ollama_server: The local model server running on http://localhost:11434
Use this for full isolation and easy multi-service orchestration. Recommended hardware: at least 16–32 GB of RAM, an i7-class CPU or similar, and optionally a GPU. The LLM is downloaded into your Docker container and requires around 5 GB of disk space. A more powerful machine will respond faster; on weaker hardware, expect slow responses and be patient.
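Once the stack is up, you can sanity-check the Ollama server from Python. A minimal sketch using the `requests` package against Ollama's standard `/api/tags` endpoint (the port assumes the default mapping above):

```python
import requests  # pip install requests

# /api/tags lists the models Ollama has pulled locally.
resp = requests.get("http://localhost:11434/api/tags", timeout=10)
resp.raise_for_status()
print([model["name"] for model in resp.json().get("models", [])])
```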
