Welcome to the llmgo examples directory! These examples are designed to help you understand the core concepts of the framework, from a basic single-agent chat to a highly complex, multi-agent orchestrator with Vector Database integration.
Since llmgo is built to run entirely locally with zero external API costs, you need to set up Ollama before running these examples.
- Install Ollama: Download from ollama.com and start the service.
- Pull Required Models: Open your terminal and run the following commands to download the necessary models used in these examples:
  ```sh
  # For general reasoning and coding
  ollama pull qwen2.5:7b-instruct

  # For RAG (Vector Embeddings)
  ollama pull nomic-embed-text
  ```

We recommend exploring the examples in the following order:
### 1. Simple Chat

A fundamental example of a single ReAct Agent equipped with standard tools (Calculator, Time). Shows the basic Actor loop and terminal UI.
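At its core, a ReAct agent loops: the model either requests a tool call or emits a final answer, and each tool result is fed back as the next observation. A minimal, framework-free sketch of that loop (the `Step` type, `runReAct`, and the scripted model are illustrative stand-ins, not the llmgo API):

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// Step is one model output: either a tool call or a final answer.
// (Hypothetical type for illustration; not the llmgo API.)
type Step struct {
	Tool  string // empty means final answer
	Input string
	Final string
}

// tools maps a tool name to its implementation, like a tool registry.
var tools = map[string]func(string) string{
	"calculator": func(in string) string {
		n, _ := strconv.Atoi(in)
		return strconv.Itoa(n * 2) // toy "calculation": double the input
	},
	"time": func(string) string { return time.Now().Format(time.RFC3339) },
}

// runReAct drives the think -> act -> observe loop until the model
// (here a scripted stub) returns a final answer.
func runReAct(model func(observation string) Step) string {
	obs := ""
	for i := 0; i < 10; i++ { // cap iterations to avoid infinite loops
		step := model(obs)
		if step.Tool == "" {
			return step.Final
		}
		obs = tools[step.Tool](step.Input) // observation fed back to the model
	}
	return "max iterations reached"
}

func main() {
	calls := 0
	// Scripted stand-in for the LLM: first call the calculator, then finish.
	model := func(obs string) Step {
		calls++
		if calls == 1 {
			return Step{Tool: "calculator", Input: "21"}
		}
		return Step{Final: "The result is " + obs}
	}
	fmt.Println(runReAct(model)) // The result is 42
}
```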
```sh
go run ./examples/simple_chat/main.go
```

### 2. Streaming

Demonstrates how to handle real-time Token Streaming using Go Channels. Essential for building responsive Chat UIs.
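The idiomatic Go shape for this is a producer goroutine that sends tokens on a channel and closes it when generation ends, so the UI can simply range over the stream. A minimal sketch with a simulated model (not the llmgo streaming API):

```go
package main

import (
	"fmt"
	"strings"
)

// streamTokens simulates an LLM emitting tokens one at a time.
// It returns a receive-only channel that is closed when generation
// ends, so consumers can range over it.
func streamTokens(text string) <-chan string {
	out := make(chan string)
	go func() {
		defer close(out) // signal end-of-stream to the consumer
		for _, tok := range strings.Fields(text) {
			out <- tok + " "
		}
	}()
	return out
}

func main() {
	// The UI loop: render each token as soon as it arrives.
	for tok := range streamTokens("streaming keeps the UI responsive") {
		fmt.Print(tok) // print incrementally instead of waiting for the full reply
	}
	fmt.Println()
}
```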
```sh
go run ./examples/stream/main.go
```

### 3. RAG

Shows how to split documents, embed them into vectors, and store them in the blazing-fast in-memory NanovecStore for semantic search.
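Semantic search boils down to ranking stored chunks by vector similarity to the query embedding. A toy in-memory sketch of that idea, with hand-written vectors standing in for real nomic-embed-text embeddings (`doc` and `search` are illustrative, not the NanovecStore API):

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// doc pairs a text chunk with its embedding vector.
type doc struct {
	Text string
	Vec  []float64
}

// cosine returns the cosine similarity between two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// search ranks stored chunks by similarity to the query vector and
// returns the texts of the top k matches.
func search(store []doc, query []float64, k int) []string {
	sort.Slice(store, func(i, j int) bool {
		return cosine(store[i].Vec, query) > cosine(store[j].Vec, query)
	})
	if k > len(store) {
		k = len(store)
	}
	out := make([]string, k)
	for i := 0; i < k; i++ {
		out[i] = store[i].Text
	}
	return out
}

func main() {
	store := []doc{
		{"Go channels enable streaming", []float64{1, 0, 0}},
		{"Vector search powers RAG", []float64{0, 1, 0}},
		{"Ollama runs models locally", []float64{0, 0, 1}},
	}
	// A real system would embed the query with nomic-embed-text;
	// here the query vector is hand-written to point at the RAG chunk.
	top := search(store, []float64{0.1, 0.9, 0.1}, 1)
	fmt.Println(top[0]) // Vector search powers RAG
}
```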
```sh
go run ./examples/rag/main.go
```

### 4. Data Ingestion

A high-performance, concurrent pipeline for ingesting large numbers of local text/markdown files into the Vector Database.
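The usual shape of such a pipeline is a fixed worker pool fed by a channel: each worker chunks (and, in the real example, embeds and stores) documents concurrently. A self-contained sketch with in-memory documents standing in for files (function names and chunk sizes are illustrative):

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// chunk splits a document into fixed-size word windows, the usual
// pre-embedding step.
func chunk(text string, size int) []string {
	words := strings.Fields(text)
	var out []string
	for i := 0; i < len(words); i += size {
		end := i + size
		if end > len(words) {
			end = len(words)
		}
		out = append(out, strings.Join(words[i:end], " "))
	}
	return out
}

// ingest fans documents out to a fixed pool of workers that chunk
// them concurrently and collects all chunks.
func ingest(docs []string, workers, chunkSize int) []string {
	in := make(chan string)
	out := make(chan string)
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for d := range in { // each worker pulls documents off the channel
				for _, c := range chunk(d, chunkSize) {
					out <- c
				}
			}
		}()
	}
	go func() { // feed the pool, then signal no more work
		for _, d := range docs {
			in <- d
		}
		close(in)
	}()
	go func() { wg.Wait(); close(out) }() // close output once all workers finish
	var all []string
	for c := range out {
		all = append(all, c)
	}
	return all
}

func main() {
	docs := []string{"a b c d e", "f g h"}
	chunks := ingest(docs, 4, 2)
	fmt.Println(len(chunks)) // 5
}
```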
(Note: Change directory to the example folder first so the loader can find the `./docs` folder.)

```sh
cd examples/data_ingestion
go run main.go
```

### 5. HTTP Server

A production-ready example showing how to integrate the llmgo multi-agent system into a standard Go `net/http` server, with Dependency Injection and Context-based request cancellation.
```sh
go run ./examples/httpserver/main.go
```

In another terminal:

```sh
curl -X POST http://localhost:8080/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Write a hello world in Go"}'
```

### 6. Graceful Shutdown

Demonstrates enterprise-grade concurrent teardown. Shows how to safely trap OS signals (Ctrl+C) to stop active Agents and the Orchestrator without corrupting memory or database state.
```sh
go run ./examples/graceful_shutdown/main.go
```

### 7. Full System

The ultimate demonstration of the llmgo framework. It combines everything:
- VectorDB (RAG Knowledge Base)
- Tool Registry (Dynamic Capabilities)
- 3 Specialized Agents (Researcher, Writer, Reviewer)
- LLM Router (Intelligent Orchestration)
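An orchestrating router's job is to map each incoming task to the right specialist. The real example asks an LLM to classify; this keyword-based stand-in (illustrative, not the llmgo router API) just shows the dispatch shape:

```go
package main

import (
	"fmt"
	"strings"
)

// route picks a specialist agent for a task. A real router would ask
// the LLM to classify; keyword matching stands in for that here.
func route(task string) string {
	t := strings.ToLower(task)
	switch {
	case strings.Contains(t, "research") || strings.Contains(t, "find"):
		return "Researcher"
	case strings.Contains(t, "review") || strings.Contains(t, "check"):
		return "Reviewer"
	default:
		return "Writer" // drafting is the fallback specialty
	}
}

func main() {
	for _, task := range []string{
		"Find recent papers on vector search",
		"Draft a blog post about Go channels",
		"Review the draft for accuracy",
	} {
		fmt.Printf("%-42s -> %s\n", task, route(task))
	}
}
```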
```sh
go run ./examples/full_system/main.go
```

### Troubleshooting

- `connection refused` error: Make sure the Ollama application is running in the background.
- `model not found` error: Make sure you have pulled the exact model name specified in the code using `ollama pull <model_name>`.
- High CPU/RAM usage: Local LLMs require significant resources. If `qwen2.5:7b-instruct` is too slow on your machine, edit the code to use a smaller model like `qwen2.5:1.5b` or `llama3.2`.