# llmgo Examples

Welcome to the llmgo examples directory! These examples are designed to help you understand the core concepts of the framework, from a basic single-agent chat to a highly complex, multi-agent orchestrator with Vector Database integration.

## Prerequisites

Since llmgo is built to run entirely locally with zero external API costs, you need to set up Ollama before running these examples.

1. **Install Ollama:** Download it from [ollama.com](https://ollama.com) and start the service.
2. **Pull the required models:** Open your terminal and run the following commands to download the models used in these examples:

```sh
# For general reasoning and coding
ollama pull qwen2.5:7b-instruct

# For RAG (vector embeddings)
ollama pull nomic-embed-text
```

## Examples Overview

We recommend exploring the examples in the following order:

### 1. simple_chat (Start Here)

A fundamental example of a single ReAct Agent equipped with standard tools (Calculator, Time). Shows the basic Actor loop and terminal UI.

```sh
go run ./examples/simple_chat/main.go
```
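To get a feel for the tool step at the heart of a ReAct loop before opening the example, here is a minimal, self-contained sketch in plain Go. The `Tool`, `calculator`, and `dispatch` names are illustrative only, not the llmgo API:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// Tool is a minimal stand-in for an agent's tool registry entry.
// (Illustrative only; llmgo's actual interface may differ.)
type Tool func(input string) (string, error)

// calculator evaluates simple "a+b" expressions to keep the sketch small.
func calculator(input string) (string, error) {
	parts := strings.SplitN(input, "+", 2)
	if len(parts) != 2 {
		return "", fmt.Errorf("expected a+b, got %q", input)
	}
	a, err := strconv.ParseFloat(strings.TrimSpace(parts[0]), 64)
	if err != nil {
		return "", err
	}
	b, err := strconv.ParseFloat(strings.TrimSpace(parts[1]), 64)
	if err != nil {
		return "", err
	}
	return strconv.FormatFloat(a+b, 'f', -1, 64), nil
}

// now plays the role of a "Time" tool.
func now(_ string) (string, error) {
	return time.Now().Format(time.RFC3339), nil
}

// dispatch routes a model-chosen action to the matching tool.
func dispatch(tools map[string]Tool, action, input string) (string, error) {
	t, ok := tools[action]
	if !ok {
		return "", fmt.Errorf("unknown tool %q", action)
	}
	return t(input)
}

func main() {
	tools := map[string]Tool{"calculator": calculator, "time": now}
	out, _ := dispatch(tools, "calculator", "2 + 3")
	fmt.Println(out) // 5
}
```

In a real ReAct loop the model's output (an action name plus input) drives `dispatch`, and the tool's result is fed back into the next prompt.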

### 2. stream

Demonstrates how to handle real-time Token Streaming using Go Channels. Essential for building responsive Chat UIs.

```sh
go run ./examples/stream/main.go
```

### 3. rag (Retrieval-Augmented Generation)

Shows how to split documents, embed them into vectors, and store them in the blazing-fast in-memory NanovecStore for semantic search.

```sh
go run ./examples/rag/main.go
```
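Under the hood, semantic search over an in-memory store boils down to cosine similarity plus a top-k sort. This sketch uses toy 3-dimensional vectors and hypothetical `doc`/`search` names, not the NanovecStore API; real embeddings would come from `nomic-embed-text`:

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// doc pairs a text chunk with its embedding vector.
type doc struct {
	text string
	vec  []float64
}

// cosine returns the cosine similarity between two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// search ranks docs by similarity to the query vector and returns the top k.
func search(docs []doc, query []float64, k int) []string {
	sort.Slice(docs, func(i, j int) bool {
		return cosine(docs[i].vec, query) > cosine(docs[j].vec, query)
	})
	var out []string
	for i := 0; i < k && i < len(docs); i++ {
		out = append(out, docs[i].text)
	}
	return out
}

func main() {
	// Toy 3-dimensional "embeddings" for illustration only.
	docs := []doc{
		{"about cats", []float64{1, 0, 0}},
		{"about go", []float64{0, 1, 0}},
	}
	fmt.Println(search(docs, []float64{0.9, 0.1, 0}, 1)) // [about cats]
}
```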

### 4. data_ingestion

A high-performance, concurrent pipeline for ingesting large numbers of local text/markdown files into the Vector Database. (Note: change into the example directory first so the loader can find the `./docs` folder.)

```sh
cd examples/data_ingestion
go run main.go
```
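The shape of such a concurrent pipeline (fan documents out to workers, fan chunks back in) can be sketched without touching the filesystem. The `chunk` and `ingest` helpers below are illustrative, not the example's actual code:

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// chunk splits text into fixed-size word chunks, the usual first step
// before embedding. (The size is illustrative; real splitters add overlap.)
func chunk(text string, size int) []string {
	words := strings.Fields(text)
	var out []string
	for i := 0; i < len(words); i += size {
		end := i + size
		if end > len(words) {
			end = len(words)
		}
		out = append(out, strings.Join(words[i:end], " "))
	}
	return out
}

// ingest fans documents out to nWorkers goroutines and collects the
// resulting chunks, mirroring the pipeline shape of the example.
func ingest(docs []string, nWorkers, size int) []string {
	in := make(chan string)
	out := make(chan string)
	var wg sync.WaitGroup
	for w := 0; w < nWorkers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for d := range in {
				for _, c := range chunk(d, size) {
					out <- c // a real pipeline would embed and store here
				}
			}
		}()
	}
	go func() {
		for _, d := range docs {
			in <- d
		}
		close(in)
		wg.Wait()
		close(out)
	}()
	var all []string
	for c := range out {
		all = append(all, c)
	}
	return all
}

func main() {
	chunks := ingest([]string{"a b c d", "e f"}, 2, 2)
	fmt.Println(len(chunks)) // 3
}
```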

### 5. httpserver

A production-ready example showing how to integrate the llmgo Multi-Agent system into a standard Go `net/http` server with Dependency Injection and Context-based request cancellation.

```sh
go run ./examples/httpserver/main.go
# In another terminal:
# curl -X POST http://localhost:8080/chat -H "Content-Type: application/json" -d '{"message": "Write a hello world in Go"}'
```

### 6. graceful_shutdown

Demonstrates enterprise-grade concurrent teardown. Shows how to safely trap OS signals (Ctrl+C) to stop active Agents and the Orchestrator without corrupting memory or database state.

```sh
go run ./examples/graceful_shutdown/main.go
```

### 7. full_system (The Masterpiece)

The ultimate demonstration of the llmgo framework. It combines everything:

- VectorDB (RAG Knowledge Base)
- Tool Registry (Dynamic Capabilities)
- 3 Specialized Agents (Researcher, Writer, Reviewer)
- LLM Router (Intelligent Orchestration)

```sh
go run ./examples/full_system/main.go
```
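To make the Router idea concrete: the real system asks an LLM to pick the next agent, but the routing contract can be sketched with simple keyword rules. All names below are illustrative, not the llmgo API:

```go
package main

import (
	"fmt"
	"strings"
)

// Agent names the three roles in the example.
type Agent string

const (
	Researcher Agent = "researcher"
	Writer     Agent = "writer"
	Reviewer   Agent = "reviewer"
)

// route is a toy stand-in for the LLM Router: the real system has a model
// choose the agent; fixed keywords make the contract easy to see.
func route(task string) Agent {
	t := strings.ToLower(task)
	switch {
	case strings.Contains(t, "find") || strings.Contains(t, "research"):
		return Researcher
	case strings.Contains(t, "review") || strings.Contains(t, "check"):
		return Reviewer
	default:
		return Writer
	}
}

func main() {
	for _, task := range []string{
		"research the Go scheduler",
		"draft an intro paragraph",
		"review the result",
	} {
		fmt.Printf("%-28s -> %s\n", task, route(task))
	}
}
```

Whatever decides (keywords or a model), the orchestrator's job is the same: pick an agent, hand it the task, and feed its output to the next stage.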

## Troubleshooting

- `connection refused` error: Make sure the Ollama application is running in the background.
- `model not found` error: Make sure you have pulled the exact model name specified in the code using `ollama pull <model_name>`.
- High CPU/RAM usage: Local LLMs require significant resources. If `qwen2.5:7b-instruct` is too slow on your machine, edit the code to use a smaller model like `qwen2.5:1.5b` or `llama3.2`.
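For the `connection refused` case, a quick programmatic health check can help: Ollama's HTTP API listens on `http://localhost:11434` by default and answers a plain GET. The `pingOllama` helper below is a hypothetical convenience, not part of llmgo:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// pingOllama does a quick GET against the Ollama HTTP endpoint and
// returns the response body, or an error if the service is unreachable.
func pingOllama(baseURL string) (string, error) {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(baseURL)
	if err != nil {
		return "", err // typically "connection refused" when Ollama is down
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil // Ollama normally replies "Ollama is running"
}

func main() {
	if body, err := pingOllama("http://localhost:11434"); err != nil {
		fmt.Println("Ollama unreachable:", err)
	} else {
		fmt.Println(body)
	}
}
```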