A powerful local GPT agent with tool calling capabilities and MCP (Model Context Protocol) integrations.
- 🤖 OpenAI Integration: Use GPT-4, GPT-4-turbo, or GPT-3.5-turbo models
- 🛠️ Tool Calling: Built-in tools for file operations, command execution, and more
- 🔌 MCP Support: Load and use external tools via Model Context Protocol
- 💬 Interactive Chat: Conversational interface with context retention
- 🎨 Beautiful CLI: Color-coded output with loading indicators
- ⚙️ Configurable: Environment variables and command-line options
The easiest way to get started is using our pre-built Docker image:
```bash
# Pull and run the latest version
docker pull ghcr.io/stickley-ai/stick.gpt:latest
docker run -it --rm -e OPENAI_API_KEY="your-key" ghcr.io/stickley-ai/stick.gpt:latest chat
```

🔗 Find all versions: GitHub Container Registry
```bash
# Clone the repository
git clone https://github.com/Stickley-AI/stick.gpt.git
cd stick.gpt

# Install dependencies
npm install

# Set up environment variables
cp .env.example .env
# Edit .env and add your OpenAI API key
```

Create a .env file with your OpenAI API key:
```bash
OPENAI_API_KEY=your-api-key-here
MODEL=gpt-4o-mini
TEMPERATURE=0.7
MAX_TOKENS=2000
```

Start an interactive chat session:

```bash
npm start
# or
node cli.js chat
```

Options:
- -m, --model <model>: Choose the OpenAI model (default: gpt-4o-mini)
- -t, --temperature <temp>: Set the sampling temperature (0.0-2.0)
- --no-tools: Disable built-in tools
- --mcp-config <path>: Load an MCP configuration file or directory
- -s, --system <prompt>: Set a custom system prompt
Example:
```bash
node cli.js chat --model gpt-4o --temperature 0.5 --system "You are a helpful coding assistant"
```

Ask a single question:
```bash
node cli.js ask "What is the capital of France?"
```

List the available tools:

```bash
node cli.js tools
```

Generate an example MCP configuration:

```bash
node cli.js mcp-example -o my-mcp-config.json
```

The agent comes with several built-in tools:
- read_file: Read file contents from the filesystem
- write_file: Write content to a file
- list_directory: List directory contents
- execute_command: Execute shell commands
- get_current_time: Get current date and time
- web_search: Search the web (placeholder)
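Each built-in tool pairs a JSON-Schema parameter description with an async handler. As a rough sketch of that shape (hypothetical; see tools.js for the actual definitions), get_current_time might look like:

```javascript
// Hypothetical sketch of a built-in tool definition: a name, a
// description, a JSON-Schema "parameters" block, and an async handler.
const getCurrentTime = {
  name: 'get_current_time',
  description: 'Get current date and time',
  parameters: {
    type: 'object',
    properties: {}, // this tool takes no arguments
    required: []
  },
  // Handlers receive the parsed arguments and return a JSON-serializable result.
  handler: async () => ({ time: new Date().toISOString() })
};

// Invoking the handler directly:
getCurrentTime.handler({}).then((result) => console.log(result.time));
```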
MCP (Model Context Protocol) allows you to extend the agent with custom tools. Create a JSON configuration file:
```json
{
  "name": "my-tools",
  "version": "1.0.0",
  "description": "Custom tools configuration",
  "tools": [
    {
      "name": "custom_tool",
      "description": "Description of what this tool does",
      "parameters": {
        "type": "object",
        "properties": {
          "input": {
            "type": "string",
            "description": "Input parameter description"
          }
        },
        "required": ["input"]
      }
    }
  ]
}
```

Load it with:
```bash
node cli.js chat --mcp-config ./my-tools.json
```

```
You: Read the contents of package.json
Assistant: [Uses read_file tool] Here are the contents of package.json...

You: What files are in the current directory?
Assistant: [Uses execute_command tool] Here are the files in the current directory...

You: Create a file called hello.txt with "Hello World" and then read it back to me
Assistant: [Uses write_file and read_file tools] I've created the file with "Hello World" and confirmed its contents...
```
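Each "[Uses X tool]" step above corresponds to the model emitting a tool call that the agent routes to the matching handler. A minimal sketch of that dispatch, using OpenAI-style tool-call objects (hypothetical shapes; the real routing lives in agent.js):

```javascript
// Route one OpenAI-style tool call (name + JSON-encoded arguments)
// to the matching handler and wrap the result as a "tool" message.
async function dispatchToolCall(registry, toolCall) {
  const tool = registry.get(toolCall.function.name);
  if (!tool) {
    return { role: 'tool', content: `Unknown tool: ${toolCall.function.name}` };
  }
  const args = JSON.parse(toolCall.function.arguments);
  const result = await tool.handler(args);
  return { role: 'tool', content: JSON.stringify(result) };
}

// A toy registry with one echo-style tool for illustration.
const registry = new Map([
  ['echo', { handler: async (args) => ({ echoed: args.text }) }]
]);

dispatchToolCall(registry, {
  function: { name: 'echo', arguments: '{"text":"hi"}' }
}).then((msg) => console.log(msg.content)); // {"echoed":"hi"}
```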
```
stick.gpt/
├── agent.js          # Core agent implementation
├── tools.js          # Built-in tool definitions
├── mcp.js            # MCP integration
├── cli.js            # Command-line interface
├── package.json      # Node.js dependencies
├── .env.example      # Environment variable template
└── README.md         # Documentation
```
You can add custom tools programmatically:
```javascript
const Agent = require('./agent');

const agent = new Agent();

agent.registerTool({
  name: 'my_custom_tool',
  description: 'Does something custom',
  parameters: {
    type: 'object',
    properties: {
      input: { type: 'string', description: 'Input parameter' }
    },
    required: ['input']
  },
  handler: async (args) => {
    // Your tool implementation
    return { success: true, result: 'Done!' };
  }
});
```

Requirements:

- Node.js >= 18.0.0
- OpenAI API key
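Because the parameters block of a registered tool is plain JSON Schema, the same format OpenAI's function-calling API expects, exposing a tool to the model is just a thin wrapper. A sketch (hypothetical helper; not necessarily how agent.js serializes tools):

```javascript
// Wrap a registered tool in the OpenAI chat-completions "tools" format.
// The handler stays local; only the name, description, and schema are
// ever sent to the model.
function toOpenAITool(tool) {
  return {
    type: 'function',
    function: {
      name: tool.name,
      description: tool.description,
      parameters: tool.parameters
    }
  };
}

const wrapped = toOpenAITool({
  name: 'my_custom_tool',
  description: 'Does something custom',
  parameters: { type: 'object', properties: {}, required: [] }
});
console.log(wrapped.function.name); // my_custom_tool
```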
- 📦 Docker Images: GitHub Container Registry - Pre-built Docker images
- 🚀 GitHub Actions: Workflows - Automated builds and deployments
- 📝 npm Package: Coming soon
Multiple deployment options are available via GitHub Actions:
- Docker - Deploy to GitHub Container Registry, cloud platforms, or run locally
- npm - Publish as a global npm package
- Cloud Platforms - Deploy to Azure, AWS, Google Cloud, Heroku, Kubernetes
- Manual - Direct server deployment with PM2 or systemd
See DEPLOYMENT.md for detailed deployment instructions and options.
```bash
# Pull and run the latest Docker image
docker pull ghcr.io/stickley-ai/stick.gpt:latest
docker run -it --rm -e OPENAI_API_KEY="your-key" ghcr.io/stickley-ai/stick.gpt:latest chat
```

Apache-2.0 - See LICENSE file for details
Contributions are welcome! Please feel free to submit a Pull Request.
For issues and questions, please open an issue on GitHub.
Planned features:

- Full MCP server communication
- Additional built-in tools
- Conversation persistence
- Web search integration
- Multiple AI provider support
- Plugin system
Built with OpenAI's GPT models and designed to integrate with the Model Context Protocol.