This guide covers using tinyMem with the official OpenAI Python/Node.js SDKs, as well as any tool that accepts an OPENAI_BASE_URL.
tinyMem acts as a transparent proxy: you point your client's base_url at tinyMem (default http://localhost:8080/v1). On each request:
- tinyMem receives the user prompt.
- It performs a lexical search in the local project memory.
- It injects relevant memories into the system prompt.
- It forwards the enriched request to the actual LLM provider (OpenAI, Ollama, LM Studio, etc.) defined in your config.
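The recall-and-inject steps above can be sketched in a few lines of Python. This is purely illustrative: the memory store, the overlap-based scoring, and the system-prompt format are assumptions for the sketch, not tinyMem's actual implementation.

```python
# Illustrative sketch of the proxy's recall-and-inject step.
# Scoring and prompt format are assumptions, not tinyMem internals.

def lexical_recall(query: str, memories: list[str], top_k: int = 3) -> list[str]:
    """Rank stored memories by word overlap with the query; drop non-matches."""
    q_words = set(query.lower().split())
    scored = [(len(q_words & set(m.lower().split())), m) for m in memories]
    scored = [(score, m) for score, m in scored if score > 0]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [m for _, m in scored[:top_k]]

def inject_memories(messages: list[dict], memories: list[str]) -> list[dict]:
    """Prepend recalled memories to the conversation as a system message."""
    if not memories:
        return messages
    context = "Relevant project memory:\n" + "\n".join(f"- {m}" for m in memories)
    return [{"role": "system", "content": context}] + messages

memories = [
    "Deployment runs via GitHub Actions on push to main",
    "The staging cluster lives in eu-west-1",
    "Lunch menu: pizza on Fridays",
]
msgs = [{"role": "user", "content": "What is the deployment process?"}]
recalled = lexical_recall(msgs[0]["content"], memories)
enriched = inject_memories(msgs, recalled)
```

After this step, `enriched` is what gets forwarded to the backend: the original user message, preceded by a system message containing only the memories that matched the query.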
```shell
pip install openai
```

```python
from openai import OpenAI

# 1. Point to the tinyMem proxy
client = OpenAI(
    base_url="http://localhost:8080/v1",
    api_key="dummy",  # tinyMem handles the real auth with the backend
)

# 2. Chat as normal
response = client.chat.completions.create(
    model="gpt-4o",  # Model name must be valid for the BACKEND
    messages=[
        {"role": "user", "content": "What is the deployment process for this app?"}
    ],
)
print(response.choices[0].message.content)
```

Verification:
Check the tinyMem proxy logs. You should see a line like:

```
[Recall] Found 3 memories for query 'deployment process'
```
```shell
npm install openai
```

```javascript
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'http://localhost:8080/v1',
  apiKey: 'dummy',
});

async function main() {
  const completion = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'What is the deployment process?' }],
    model: 'gpt-4o',
  });
  console.log(completion.choices[0].message.content);
}

main();
```

Many CLI tools (like fabric, interpreter, etc.) respect the standard OpenAI environment variables.
```shell
export OPENAI_API_BASE="http://localhost:8080/v1"
export OPENAI_BASE_URL="http://localhost:8080/v1"  # Some tools use this variant
export OPENAI_API_KEY="dummy"

# Run your tool
my-ai-tool "Plan the next sprint based on recent decisions"
```

tinyMem injects headers into the response to let you know what happened:
- `X-TinyMem-Recall-Count`: Number of memories found and injected.
- `X-TinyMem-Recall-Status`: `injected`, `none`, or `failed`.
- `X-TinyMem-Version`: Version of tinyMem serving the request.
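As a sketch of how a client might act on these headers, here is a small hypothetical helper (not part of tinyMem or any SDK). With recent versions of the official Python SDK, raw response headers are exposed via `client.chat.completions.with_raw_response.create(...)`.

```python
# Hypothetical helper: turn tinyMem's response headers into a short status line.
# The header names come from the tinyMem docs; the helper itself is illustrative.

def summarize_recall(headers: dict) -> str:
    status = headers.get("X-TinyMem-Recall-Status", "unknown")
    count = headers.get("X-TinyMem-Recall-Count", "0")
    if status == "injected":
        return f"{count} memories injected"
    if status == "failed":
        return "recall failed; request forwarded without memories"
    return "no memories injected"

print(summarize_recall({
    "X-TinyMem-Recall-Status": "injected",
    "X-TinyMem-Recall-Count": "3",
}))
```

This is handy for logging whether memory recall actually contributed context to a given completion.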
For full configuration options, see Configuration.md.
- Authentication Errors: If using a real OpenAI backend, ensure the API key is set in `.tinyMem/config.toml` or via the `TINYMEM_LLM_API_KEY` env var. The client's `api_key` is ignored by tinyMem but still required by the SDKs.
- Model Not Found: Ensure the model name you request matches what your backend supports.
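For orientation, a minimal backend configuration might look like the fragment below. The section and field names here are assumptions for illustration; check Configuration.md for the actual schema.

```toml
# .tinyMem/config.toml -- illustrative only; see Configuration.md for the real schema
[llm]
provider = "openai"
api_key  = "sk-..."   # placeholder; or set TINYMEM_LLM_API_KEY instead
```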