Cast scripting spells to animate LLM golems 🧙‍♂️✨
go-llmspell transforms complex LLM interactions into simple, magical scripts. Write spells in Lua, JavaScript, or Tengo that bring AI agents to life, automate conversations, and orchestrate intelligent workflows, all with the reliability and performance of Go.
```lua
-- example spell: creative-writer.lua
local topic = params.topic or "the future of AI"

-- Create an agent with creative writing abilities
local writer = agent.create({
    name = "creative_writer",
    system_prompt = "You are a creative writer with a vivid imagination."
})

-- Generate a story
local story = writer.run("Write a short story about " .. topic)

-- Save the result
fs.write("story.md", story)
log.info("Story created!", {topic = topic})
```

- 🪄 Scriptable Magic: Write spells in Lua, JavaScript, or Tengo to control LLMs
- 🤖 Agent Orchestration: Create and manage AI agents with tools and workflows
- 🔧 Tool Integration: Build custom tools that agents can use
- ⚡ Go Performance: Native Go speed with embedded scripting flexibility
- 🔒 Secure Execution: Sandboxed script execution with resource limits
- 📚 Spellbook Library: Pre-written spells for common AI tasks
- Architecture Overview - System design and components
- Implementation Guide - Development roadmap
- Spell Development Guide - How to write spells
- Getting Started - Quick start guide
- Documentation Index - All documentation
This project is under active development. See our TODO for current tasks and TODO-DONE for completed work.
- ✅ Architecture designed and documented
- ✅ go-llms integration complete
- ✅ Basic project structure
- ✅ Core infrastructure implementation (Phase 1 complete)
  - ✅ Engine interface and registry system
  - ✅ Bridge infrastructure with lifecycle management
  - ✅ Security context with resource limits
- ✅ LLM Bridge enhancement (Phase 2 complete)
  - ✅ Multi-provider support (OpenAI, Anthropic, Gemini)
  - ✅ Provider switching and model discovery
  - ✅ Type conversion utilities
  - ✅ Comprehensive test coverage
- ✅ Lua engine implementation (Phase 3 complete)
  - ✅ Full Lua VM integration with security sandbox
  - ✅ Complete standard library (JSON, HTTP, Storage, Log, Promise)
  - ✅ LLM bridge for Lua scripts
  - ✅ Example spells demonstrating capabilities
- ✅ Tool System implementation (Phase 4 complete)
  - ✅ Tool interface and registry for managing tools
  - ✅ Script-based tool creation with parameter validation
  - ✅ Lua bridge for tool system (tools module)
  - ✅ Example tools demonstrating the system
- ✅ Agent System implementation (Phase 5 complete)
  - ✅ Agent interface and registry system
  - ✅ Default agent implementation with go-llms integration
  - ✅ Tool integration for agent capabilities
  - ✅ Agent bridge for script access
  - ✅ Lua integration with comprehensive examples
  - ✅ Research, Code Analysis, and Planning agent examples
```sh
# Clone the repository
git clone https://github.com/lexlapax/go-llmspell.git
cd go-llmspell

# Initialize submodules (for go-llms reference)
git submodule update --init --recursive

# Build the project
make build

# Run tests
make test
```

The easiest way to configure API keys is using a .env file:
```sh
# Copy the example environment file
cp .env.example .env

# Edit .env and add your API keys:
# OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...
# GEMINI_API_KEY=AI...
```

The CLI will automatically load the .env file. See Environment Setup for more details.
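If you prefer not to use a .env file, you can also export the keys directly in your shell session; a minimal sketch (the key values below are placeholders, not real keys):

```shell
# Export placeholder API keys for the current shell session
# (replace the values with your real keys before running spells)
export OPENAI_API_KEY="sk-placeholder"
export ANTHROPIC_API_KEY="sk-ant-placeholder"

# Confirm the variables are visible to child processes
env | grep '^OPENAI_API_KEY='
```

Exported variables last only for the current session, so the .env approach is more convenient for day-to-day use.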
```sh
# Run the async LLM example (demonstrates promises)
./bin/llmspell run examples/spells/async-llm

# Compare multiple LLM providers
./bin/llmspell run examples/spells/provider-compare --param prompt="What is AI?"

# Simple chat assistant demo
./bin/llmspell run examples/spells/chat-assistant
```

- async-llm: Demonstrates promise-based async patterns with LLMs
- provider-compare: Compares responses from multiple providers
- chat-assistant: Interactive chat with conversation history (demo version)
- hello-llm: Basic spell structure example
- lua-agent: Comprehensive agent examples showing:
  - Research agent with web_fetch tool integration
  - Code analysis agent with custom Lua tools
  - Planning agent for task decomposition
- builtin-tools: Demonstrates using built-in tools from go-llms
- tool-example: Shows how to create and use custom tools
```sh
# Run a spell
llmspell run my-spell.lua

# List available spells
llmspell list

# Create a new spell
llmspell create my-spell
```

go-llmspell uses a layered architecture:
```
┌─────────────────────────────────────┐
│       Spell Scripts (Lua/JS)        │
├─────────────────────────────────────┤
│           Script Engines            │
├─────────────────────────────────────┤
│            Bridge Layer             │
├─────────────────────────────────────┤
│               go-llms               │
└─────────────────────────────────────┘
```
- Script Engines: Lua (gopher-lua), JavaScript (goja), Tengo
- Bridges: LLM, Tools, Agents, Workflows, StdLib
- Security: Sandboxing, resource limits, filesystem jail
- Spells: Reusable scripts for common tasks
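The engine layer above resolves a script's language to a concrete runtime through a registry. A minimal Go sketch of that pattern follows; the `Engine`, `Registry`, and `echoEngine` names are illustrative inventions for this README, not go-llmspell's actual API:

```go
package main

import "fmt"

// Engine is a hypothetical interface for a script runtime
// (e.g. Lua via gopher-lua, JavaScript via goja).
type Engine interface {
	Name() string
	Run(script string) (string, error)
}

// Registry maps engine names (e.g. "lua", "js") to implementations.
type Registry struct {
	engines map[string]Engine
}

func NewRegistry() *Registry {
	return &Registry{engines: make(map[string]Engine)}
}

// Register adds an engine under its own name.
func (r *Registry) Register(e Engine) {
	r.engines[e.Name()] = e
}

// Get looks up an engine by name.
func (r *Registry) Get(name string) (Engine, bool) {
	e, ok := r.engines[name]
	return e, ok
}

// echoEngine is a stand-in runtime that returns its input unchanged.
type echoEngine struct{}

func (echoEngine) Name() string { return "echo" }
func (echoEngine) Run(script string) (string, error) {
	return script, nil
}

func main() {
	r := NewRegistry()
	r.Register(echoEngine{})
	if e, ok := r.Get("echo"); ok {
		out, _ := e.Run("print('hello')")
		fmt.Println(out)
	}
}
```

In this shape, adding Tengo or JavaScript support means registering one more `Engine` implementation; spells stay unchanged.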
```lua
-- Research a topic using web search and LLM analysis
local researcher = spell.load("web-researcher")
local report = researcher.run({topic = "quantum computing"})
```

```javascript
// Automated code review with AI
const reviewer = await spell.load("code-reviewer");
const review = await reviewer.run({
    file: "main.go",
    style: "golang"
});
```

```lua
-- Generate blog posts with research and editing
local writer = spell.load("blog-writer")
local post = writer.run({
    topic = "The Future of AI",
    tone = "professional",
    length = 1000
})
```

We welcome contributions! Please see our Contributing Guidelines (coming soon).
- Fork the repository
- Create a feature branch
- Write tests first (TDD approach)
- Implement your feature
- Run quality checks: `make fmt vet lint test`
- Submit a pull request
- go-llms v0.2.6 - LLM provider abstraction
- gopher-lua v1.1.1 - Lua 5.1 VM (integrated)
- goja - JavaScript engine (planned)
- tengo - Embeddable script language (planned)
[License information to be added]
Built on top of the excellent go-llms library.
Note: This project is under active development. APIs and features may change. Check the documentation for the latest information.