A Gleam library for interacting with Large Language Model APIs. Currently supports Google’s Gemini and OpenAI’s GPT APIs with a clean, type-safe interface.
- 🔒 Type-safe: Leverages Gleam’s type system for reliable API interactions
- 🌟 Clean API: Fluent builder pattern for constructing requests
- 🔌 Extensible: Designed to support multiple LLM providers
- ⚡ Async: Built on Gleam’s concurrent model
```sh
gleam add llmgleam
```
```gleam
import gleam/io
import gleam/string
import llmgleam
import llmgleam/client
import llmgleam/messages

pub fn main() {
  // Create a client
  let client = llmgleam.new_client(client.Gemini, "your-api-key-here")

  // Get a completion using the builder pattern
  let result =
    client
    |> client.request()
    |> client.with_message(messages.user("Hello, how are you?"))
    |> client.completion("gemini-2.5-flash")

  case result {
    Ok(completion) -> io.println("Response: " <> completion.content)
    Error(error) -> io.println("Error: " <> string.inspect(error))
  }
}
```

```gleam
import llmgleam
import llmgleam/client

// For Gemini
let gemini_client = llmgleam.new_client(client.Gemini, "your-gemini-api-key")

// For GPT
let gpt_client = llmgleam.new_client(client.GPT, "your-openai-api-key")
```

The library uses a fluent builder pattern for constructing requests:
```gleam
import llmgleam/client
import llmgleam/messages

let completion =
  client
  |> client.request()
  |> client.with_message(messages.user("Hello, how are you?"))
  |> client.with_system_instruction("You are a helpful assistant")
  |> client.completion("gemini-2.5-flash")
```

```gleam
import llmgleam/messages

// Create a user message
let user_msg = messages.user("Hello, how are you?")

// System instructions are added via with_system_instruction
```

- `llmgleam.new_client(provider, api_key)` — creates a new LLM client for the specified provider.
- `client.request(client)` — initializes a new request builder from a client.
- `client.with_message(request, message)` — adds a message to the request.
- `client.with_system_instruction(request, instruction)` — adds a system instruction to the request.
- `client.completion(request, model)` — executes the request and returns a completion.
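Because each builder function returns the updated request, `with_message` can be chained to accumulate a multi-message history before requesting a completion. A minimal sketch, assuming repeated `with_message` calls append messages in order (`messages.user` is the only message constructor shown in this README):

```gleam
// Build a request carrying more than one message, then complete it
let result =
  client
  |> client.request()
  |> client.with_message(messages.user("What is Gleam?"))
  |> client.with_message(messages.user("Answer in one sentence, please."))
  |> client.completion("gemini-2.5-flash")
```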
- `client.Gemini` — authentication: API key via Google AI Studio
- `client.GPT` — authentication: API key via OpenAI
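Both providers expose the same client interface, so switching between them only changes the provider tag, the API key, and the model name. A hedged sketch — the `"gpt-4o"` model string is an assumption for illustration, since this README only names `"gemini-2.5-flash"`:

```gleam
import llmgleam
import llmgleam/client
import llmgleam/messages

let gpt_client = llmgleam.new_client(client.GPT, "your-openai-api-key")

let result =
  gpt_client
  |> client.request()
  |> client.with_message(messages.user("Hello!"))
  // Model name is illustrative; pass whichever GPT model your key can access
  |> client.completion("gpt-4o")
```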
```gleam
import llmgleam
import llmgleam/client
import llmgleam/messages

let client = llmgleam.new_client(client.Gemini, "your-api-key")

let completion =
  client
  |> client.request()
  |> client.with_message(messages.user("hello, how are you?"))
  |> client.with_system_instruction("you are a helpful conversationalist")
  |> client.completion("gemini-2.5-flash")
```

```sh
gleam test
```

For integration tests (requires API keys):

```sh
RUN_INTEGRATION_TESTS=1 GEMINI_KEY=your-key GPT_KEY=your-key gleam test
```

```sh
gleam build
```

Contributions are welcome! Areas for improvement:
- [X] Add support for OpenAI GPT models
- [X] Add support for Gemini API
- [ ] Add support for Gemini through Vertex AI
- [ ] Add support for Anthropic Claude
- [ ] Add streaming support
- [ ] Add function calling support
- [ ] Add image/multimodal support
This project is licensed under the MIT License - see the LICENSE file for details.