Natural language interface to any API.
Talk to APIs like you talk to a person. No SDKs, no docs, no boilerplate — just say what you want.
```python
from semanticapi import AgenticProcessor

processor = AgenticProcessor()
processor.add_provider("stripe", api_key="sk_live_...")

result = processor.process("What's my Stripe balance and show my last 3 payments")
print(result.response)
# Your Stripe balance is $4,285.92 (available) with $1,200.00 pending.
#
# Last 3 payments:
# 1. $299.00 — succeeded (Jan 15)
# 2. $49.99 — succeeded (Jan 14)
# 3. $150.00 — refunded (Jan 12)
```

You describe what you want in plain English. The AI agent figures out which APIs to call, calls them, handles pagination and errors, and gives you a human-readable answer.
"send an SMS to +1555-123-4567 saying 'Meeting at 3pm'" → Twilio API
"list my open GitHub issues in the react repo" → GitHub API
"create a $50 payment for customer cus_abc123" → Stripe API
"search my Notion for project planning docs" → Notion API
Works with 163 built-in providers covering 770+ API capabilities — and you can add any API via a simple JSON file.
```bash
pip install semanticapi
```

Or clone and install:

```bash
git clone https://github.com/petermtj/semanticapi-engine.git
cd semanticapi-engine
pip install -e ".[all]"
```

```bash
# Set your LLM key (pick one)
export ANTHROPIC_API_KEY=sk-ant-...   # Claude (recommended)
# export OPENAI_API_KEY=sk-...        # GPT-4
# export GROQ_API_KEY=gsk_...         # Groq (fast + free tier)

# Start the server
uvicorn semanticapi.server:app --port 8080
```

```bash
curl -X POST http://localhost:8080/api/query \
  -H "Content-Type: application/json" \
  -d '{
    "query": "get my stripe balance",
    "credentials": {
      "stripe": {"api_key": "sk_live_..."}
    }
  }'
```

```bash
# Copy and edit .env
cp .env.example .env

# Run
docker compose up
```

The server starts on port 8080 with all 163 providers loaded.
```python
from semanticapi import SemanticAPI

api = SemanticAPI()
api.add_provider("stripe", api_key="sk_live_...")
api.add_provider("twilio", account_sid="AC...", auth_token="...", from_number="+15551234567")

# Fetch data
payments = api.fetch("get my last 5 stripe payments")
print(payments.data)

# Execute actions
result = api.execute("send SMS to +15559876543 saying 'Hello!'")
print(result.success)
```

The agentic processor uses an LLM to reason through multi-step queries:
```python
from semanticapi import AgenticProcessor

processor = AgenticProcessor(
    ai_provider="anthropic",  # or "openai", "groq", "ollama"
    debug=True,
)
processor.add_provider("stripe", api_key="sk_live_...")
processor.add_provider("github", access_token="ghp_...")

# Multi-step queries — the AI figures out what to call
result = processor.process("What's my Stripe balance? Also show recent payments over $100")
print(result.response)

# It asks for clarification when needed
result = processor.process("send a message to Slack")
if result.status == "needs_input":
    print(result.question)  # "Which channel should I send the message to?"
```

```python
from semanticapi import process_query

result = process_query("get my stripe balance", stripe={"api_key": "sk_live_..."})
print(result.response)
```

| Endpoint | Method | Description |
|---|---|---|
| / | GET | Landing page |
| /health | GET | Health check |
| /api/query | POST | Process a natural language query |
| /api/providers | GET | List loaded providers |
| /api/providers/{name}/configure | POST | Set provider credentials |
| /docs | GET | Interactive API docs (Swagger) |
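The same /api/query call from the curl example above can be built from Python with only the stdlib. This is a sketch: the payload shape and endpoint are taken from the example, and the credentials value is a placeholder, not a working key.

```python
import json
import urllib.request

# Build the same request the curl example sends to /api/query.
payload = {
    "query": "get my stripe balance",
    "credentials": {"stripe": {"api_key": "sk_live_..."}},
}
req = urllib.request.Request(
    "http://localhost:8080/api/query",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# With the server running, urllib.request.urlopen(req) returns the JSON response.
print(req.method, req.full_url)  # → POST http://localhost:8080/api/query
```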
The engine ships with 163 provider definitions covering payments, messaging, AI, DevOps, CRM, analytics, and more.
Featured providers:
| Category | Providers |
|---|---|
| Payments | Stripe, Square, PayPal, Coinbase, Plaid, Wise |
| Messaging | Twilio, Slack, Discord, SendGrid, Mailgun, Postmark |
| AI/ML | OpenAI, Anthropic, Hugging Face, Replicate, ElevenLabs |
| DevOps | GitHub, GitLab, Vercel, Fly.io, AWS, Cloudflare |
| CRM | HubSpot, Salesforce, Intercom, Zendesk |
| Data | Notion, Airtable, Google Sheets, Supabase, Firebase |
| Social | Twitter/X, Reddit, LinkedIn, YouTube |
| And more | 130+ additional providers across every category |
Browse all providers in the providers/ directory.
Create a JSON file — no code needed:
```json
{
  "provider": "weatherapi",
  "name": "Weather API",
  "description": "Real-time weather data",
  "base_url": "https://api.weatherapi.com/v1",
  "auth": {
    "type": "bearer",
    "prefix": "Bearer"
  },
  "capabilities": [
    {
      "id": "current_weather",
      "name": "Get Current Weather",
      "description": "Get weather conditions for a location",
      "semantic_tags": ["weather", "temperature", "forecast"],
      "endpoint": {
        "method": "GET",
        "path": "/current.json",
        "params": {
          "q": {
            "type": "string",
            "required": true,
            "description": "City name, zip code, or coordinates"
          }
        }
      }
    }
  ]
}
```

Drop it in the providers/ directory and restart. Done.
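Before restarting, it can be worth sanity-checking a new definition file for missing fields. A minimal sketch, using only the stdlib — the required keys here are inferred from the example above, and the real engine may enforce a stricter schema:

```python
# Minimal structural check for a provider definition.
# Required keys are inferred from the weatherapi example above.
REQUIRED_TOP = {"provider", "name", "base_url", "auth", "capabilities"}
REQUIRED_CAP = {"id", "name", "description", "endpoint"}

def check_provider(definition: dict) -> list[str]:
    """Return a list of human-readable problems (empty list = looks OK)."""
    problems = [f"missing top-level key: {k}" for k in REQUIRED_TOP - definition.keys()]
    for i, cap in enumerate(definition.get("capabilities", [])):
        problems += [f"capability {i}: missing key {k}" for k in REQUIRED_CAP - cap.keys()]
    return problems

# In practice you would json.load the file from providers/; a dict literal keeps
# this example self-contained.
definition = {
    "provider": "weatherapi",
    "name": "Weather API",
    "base_url": "https://api.weatherapi.com/v1",
    "auth": {"type": "bearer"},
    "capabilities": [{
        "id": "current_weather",
        "name": "Get Current Weather",
        "description": "Get weather conditions for a location",
        "endpoint": {"method": "GET", "path": "/current.json"},
    }],
}
print(check_provider(definition))  # → []
```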
```
┌──────────────────────────────────────────┐
│ User Query │
│ "get my last 5 stripe payments" │
└────────────────┬─────────────────────────┘
│
▼
┌──────────────────────────────────────────┐
│ Agentic Processor │
│ │
│ ┌─────────┐ ┌──────────┐ ┌─────────┐ │
│ │ LLM │→ │ Tool │→ │ HTTP │ │
│ │ (Claude │ │ Calling │ │ Client │ │
│ │ GPT-4 │ │ Loop │ │ │ │
│ │ Groq) │ │ │ │ │ │
│ └─────────┘ └──────────┘ └─────────┘ │
│ ↑ │ │ │
│ └──────────────┘ │ │
│ Reason about results │ │
└────────────────────────────────────┼──────┘
│
┌───────────────────┼───────────┐
▼ ▼ ▼
┌──────────┐ ┌──────────┐ ┌──────────┐
│ Stripe │ │ GitHub │ │ Slack │
│ API │ │ API │ │ API │
└──────────┘ └──────────┘ └──────────┘
```
The engine works in a loop:
- Parse — LLM understands the user's intent
- Plan — Selects which API tools to call
- Execute — Makes the HTTP request
- Reason — Examines the result, decides if more calls are needed
- Respond — Synthesizes a human-friendly answer
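The loop above can be sketched in plain Python. This is a toy illustration, not semanticapi's internals: `stub_llm` and `TOOLS` stand in for the real model and HTTP client, and all names are made up.

```python
def stub_llm(query, observations):
    # Plan: with no observations yet, decide which tool to call;
    # once a result is in hand, move to the respond step.
    if not observations:
        return {"action": "call", "tool": "stripe_balance", "args": {}}
    return {"action": "respond", "text": f"Your balance is {observations[-1]['available']}."}

# Stand-in for the HTTP client layer: one fake tool with canned data.
TOOLS = {"stripe_balance": lambda **args: {"available": "$4,285.92"}}

def run_agent(query, max_steps=5):
    observations = []
    for _ in range(max_steps):           # bounded: never loop over tools forever
        step = stub_llm(query, observations)
        if step["action"] == "respond":  # Respond: synthesize the final answer
            return step["text"]
        result = TOOLS[step["tool"]](**step["args"])  # Execute: make the call
        observations.append(result)      # Reason: feed the result back to the LLM
    return "Gave up after too many steps."

print(run_agent("get my stripe balance"))  # → Your balance is $4,285.92.
```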
| Provider | Models | Notes |
|---|---|---|
| Anthropic | Claude 4, Claude 3.5 Sonnet | Recommended. Best tool calling. |
| OpenAI | GPT-4o, GPT-4o-mini | Great alternative. |
| Groq | Llama 3.3 70B, Mixtral | Fast inference, free tier. |
| Ollama | Llama 3.2, Qwen, etc. | Local, free, private. |
```python
# Use any provider
processor = AgenticProcessor(ai_provider="groq")
processor = AgenticProcessor(ai_provider="ollama", model="llama3.2")
processor = AgenticProcessor(ai_provider="openai", model="gpt-4o")
```

The engine includes optional x402 micropayment support — monetize your self-hosted API with USDC on Base (an Ethereum L2).

```python
from semanticapi.x402 import get_price_for_endpoint, get_payment_required_header

# Check if endpoint requires payment
price = get_price_for_endpoint("/api/query")  # "0.01" USDC
if price:
    header = get_payment_required_header(price)
    # Return HTTP 402 with payment instructions
```

Configure via environment variables:
- `X402_WALLET_ADDRESS` — your receiving wallet
- `X402_NETWORK` — `eip155:8453` (mainnet) or `eip155:84532` (testnet)
- `X402_REQUIRE_PAYMENT` — set to `true` to enforce payments
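Taken together, a testnet setup might look like the following — the wallet address here is a placeholder, not a real value:

```bash
# Example x402 configuration (Base Sepolia testnet)
export X402_WALLET_ADDRESS=0xYourWalletAddress
export X402_NETWORK=eip155:84532    # testnet; use eip155:8453 for mainnet
export X402_REQUIRE_PAYMENT=true
```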
Use Semantic API as a native tool in Claude, ChatGPT, and any MCP-compatible agent:
```bash
pip install semanticapi-mcp
```

Add to Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "semanticapi": {
      "command": "uvx",
      "args": ["semanticapi-mcp"],
      "env": { "SEMANTIC_API_KEY": "your-key" }
    }
  }
}
```

semanticapi-mcp on PyPI | GitHub
Drop Semantic API into any agent framework as a tool: semantic-api-skill — ready-to-use skill package for agent frameworks.
Query APIs from your terminal with the official CLI:
```bash
pip install semanticapi-cli
semanticapi config set-key sapi_your_key
semanticapi query "send an SMS via twilio"
semanticapi discover stripe
```

Stdlib only, zero dependencies. See semanticapi-cli for full docs.
Don't want to self-host? Use the managed version at semanticapi.dev — includes auto-discovery of any API, OAuth flows, x402 payments, and a dashboard.
We welcome contributions! See CONTRIBUTING.md.
The easiest way to contribute is adding a new provider — it's just a JSON file.
AGPL-3.0 — Free to use, modify, and self-host. If you modify the engine and offer it as a service, you must open-source your changes.