Find the fastest free coding model in seconds
Ping 160 models across 20 free AI providers in real time
Install free API endpoints into your favorite AI coding tool:
OpenCode, OpenClaw, Crush, Goose, Aider, Qwen Code, OpenHands, Amp or Pi in one keystroke
npm install -g free-coding-models
free-coding-models

Why • Quick Start • Providers • Usage • TUI Keys • Contributing
Made with ❤️ and ☕ by Vanessa Depraute (aka Vava-Nessa)
There are 160+ free coding models scattered across 20 providers. Which one is fastest right now? Which one is actually stable versus just lucky on the last ping?
This CLI pings them all in parallel, shows live latency, and calculates a live Stability Score (0-100). Average latency alone is misleading if a model randomly spikes to 6 seconds; the stability score measures true reliability by combining p95 latency (30%), jitter/variance (30%), spike rate (20%), and uptime (20%).
It then writes the model you pick directly into your coding tool's config — so you go from "which model?" to "coding" in under 10 seconds.
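As a sketch, the weighted composite described above might be computed like this. The weights match the README (p95 30%, jitter 30%, spikes 20%, uptime 20%), but the normalization thresholds below are illustrative assumptions, not FCM's actual code:

```javascript
// Illustrative stability score (0-100). Weights are from the README;
// the 5 s / 2 s normalization ceilings are assumptions for this sketch.
function stabilityScore({ p95Ms, jitterMs, spikeRate, uptime }) {
  const clamp01 = (x) => Math.min(1, Math.max(0, x));
  const p95Score    = clamp01(1 - p95Ms / 5000);    // 0 ms → 1, ≥5 s → 0
  const jitterScore = clamp01(1 - jitterMs / 2000); // 0 ms → 1, ≥2 s → 0
  const spikeScore  = clamp01(1 - spikeRate);       // fraction of pings that spiked
  const score =
    0.3 * p95Score + 0.3 * jitterScore + 0.2 * spikeScore + 0.2 * clamp01(uptime);
  return Math.round(score * 100);
}

// A model with 800 ms p95, low jitter, no spikes, and full uptime scores high:
console.log(stabilityScore({ p95Ms: 800, jitterMs: 100, spikeRate: 0, uptime: 1 })); // → 94
```

The point of the composite: a model averaging 400 ms but spiking to 6 s every tenth request loses points on jitter and spike rate, even though its mean latency looks great.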
① Get a free API key — you only need one to get started:
160 coding models across 20 providers, ranked by SWE-bench Verified.
| Provider | Models | Tier range | Free tier | Env var |
|---|---|---|---|---|
| NVIDIA NIM | 44 | S+ → C | 40 req/min (no credit card needed) | NVIDIA_API_KEY |
| iFlow | 11 | S+ → A+ | Free for individuals (no req limits, 7-day key expiry) | IFLOW_API_KEY |
| ZAI | 7 | S+ → S | Free tier (generous quota) | ZAI_API_KEY |
| Alibaba DashScope | 8 | S+ → A | 1M free tokens per model (Singapore region, 90 days) | DASHSCOPE_API_KEY |
| Groq | 10 | S → B | 30‑50 RPM per model (varies by model) | GROQ_API_KEY |
| Cerebras | 7 | S+ → B | Generous free tier (developer tier 10× higher limits) | CEREBRAS_API_KEY |
| SambaNova | 12 | S+ → B | Dev tier generous quota | SAMBANOVA_API_KEY |
| OpenRouter | 11 | S+ → C | Free models (🆓): 50 req/day with <$10 credit, 1000/day with ≥$10 (20 req/min) | OPENROUTER_API_KEY |
| Hugging Face | 2 | S → B | Free monthly credits (~$0.10) | HUGGINGFACE_API_KEY |
| Together AI | 7 | S+ → A- | Credits/promos vary by account (check console) | TOGETHER_API_KEY |
| DeepInfra | 2 | A- → B+ | 200 concurrent requests (default) | DEEPINFRA_API_KEY |
| Fireworks AI | 2 | S | $1 credits – 10 req/min without payment | FIREWORKS_API_KEY |
| Mistral Codestral | 1 | B+ | 30 req/min, 2000/day | CODESTRAL_API_KEY |
| Hyperbolic | 10 | S+ → A- | $1 free trial credits | HYPERBOLIC_API_KEY |
| Scaleway | 7 | S+ → B+ | 1M free tokens | SCALEWAY_API_KEY |
| Google AI Studio | 3 | B → C | 14.4K req/day, 30/min | GOOGLE_API_KEY |
| SiliconFlow | 6 | S+ → A | Free models: usually 100 RPM, varies by model | SILICONFLOW_API_KEY |
| Cloudflare Workers AI | 6 | S → B | Free: 10k neurons/day, text-gen 300 RPM | CLOUDFLARE_API_TOKEN + CLOUDFLARE_ACCOUNT_ID |
| Perplexity API | 4 | A+ → B | Tiered limits by spend (default ~50 RPM) | PERPLEXITY_API_KEY |
| Replicate | 1 | A- | 6 req/min (no payment) – up to 3,000 RPM with payment | REPLICATE_API_TOKEN |
💡 One key is enough. Add more at any time with `P` inside the TUI.
| Tier | SWE-bench | Best for |
|---|---|---|
| S+ | ≥ 70% | Complex refactors, real-world GitHub issues |
| S | 60–70% | Most coding tasks, strong general use |
| A+/A | 40–60% | Solid alternatives, targeted programming |
| A-/B+ | 30–40% | Smaller tasks, constrained infra |
| B/C | < 30% | Code completion, edge/minimal setups |
② Install and run:
npm install -g free-coding-models
free-coding-models

On first run, you'll be prompted to enter your API key(s). You can skip providers and add more later with `P`.
Need to fix contrast because your terminal theme is fighting the TUI? Press G at any time to cycle Auto → Dark → Light. The switch recolors the full interface live: table, Settings, Help, Smart Recommend, Feedback, and Changelog.
③ Pick a model and launch your tool:
↑↓ navigate → Enter to launch
The model you select is automatically written into your tool's config (OpenCode, OpenClaw, Crush, etc.) and the tool opens immediately. Done.
If the active CLI tool is missing, FCM catches this before launch, offers a small Yes/No install prompt, installs the tool with its official global command, then resumes the same model launch automatically.
💡 You can also run `free-coding-models --goose --tier S` to pre-filter to S-tier models for Goose before the TUI even opens.
# "I want the most reliable model right now"
free-coding-models --fiable
# "I want to configure Goose with an S-tier model"
free-coding-models --goose --tier S
# "I want NVIDIA's top models only"
free-coding-models --origin nvidia --tier S
# "Show me only elite models that are currently healthy"
free-coding-models --premium
# "I want to script this — give me JSON"
free-coding-models --tier S --json | jq -r '.[0].modelId'
# "I want to configure OpenClaw with Groq's fastest model"
free-coding-models --openclaw --origin groq

| Flag | Launches |
|---|---|
| `--opencode` | OpenCode CLI |
| `--opencode-desktop` | OpenCode Desktop |
| `--openclaw` | OpenClaw |
| `--crush` | Crush |
| `--goose` | Goose |
| `--aider` | Aider |
| `--qwen` | Qwen Code |
| `--openhands` | OpenHands |
| `--amp` | Amp |
| `--pi` | Pi |
Press `Z` in the TUI to cycle between tools without restarting.
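The `--json` output can also be consumed from Node rather than jq. A minimal sketch, assuming the output is a JSON array of ranked objects that each carry a `modelId` field (as the jq example implies); the sample IDs below are invented:

```javascript
// Extract the top-ranked model ID from `free-coding-models --json` output.
// Assumes a JSON array of objects with a `modelId` field; IDs here are made up.
function firstModelId(json) {
  const models = JSON.parse(json);
  return models.length > 0 ? models[0].modelId : null;
}

// In practice you would capture the real CLI output, e.g.:
//   const { execFileSync } = require('node:child_process');
//   const out = execFileSync('free-coding-models',
//     ['--tier', 'S', '--json'], { encoding: 'utf8' });
const sample = JSON.stringify([{ modelId: 'model-a' }, { modelId: 'model-b' }]);
console.log(firstModelId(sample)); // → model-a
```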
| Key | Action |
|---|---|
| `↑↓` | Navigate models |
| `Enter` | Launch selected model in active tool |
| `Z` | Cycle target tool |
| `T` | Cycle tier filter |
| `D` | Cycle provider filter |
| `E` | Toggle configured-only mode |
| `F` | Favorite / unfavorite model |
| `G` | Cycle global theme (Auto → Dark → Light) |
| `R`/`S`/`C`/`M`/`O`/`L`/`A`/`H`/`V`/`B`/`U` | Sort columns |
| `P` | Settings (API keys, providers, updates, theme) |
| `Y` | Install Endpoints (push provider into tool config) |
| `Q` | Smart Recommend overlay |
| `N` | Changelog |
| `W` | Cycle ping cadence |
| `I` | Feedback / bug report |
| `K` | Help overlay |
| `Ctrl+C` | Exit |
→ Stability score & column reference
- Parallel pings — all 160 models tested simultaneously via native `fetch`
- Adaptive monitoring — 2s burst for 60s → 10s normal → 30s idle
- Stability score — composite 0–100 (p95 latency, jitter, spike rate, uptime)
- Smart ranking — top 3 highlighted 🥇🥈🥉
- Favorites — pin models with `F`, persisted across sessions
- Configured-only default — only shows providers you have keys for
- Keyless latency — models ping even without an API key (shown as 🔑 NO KEY)
- Smart Recommend — questionnaire picks the best model for your task type
- Install Endpoints — push a full provider catalog into any tool's config (`Y`)
- Missing tool bootstrap — detect absent CLIs, offer one-click install, then continue the selected launch automatically
- Width guardrail — shows a warning instead of a broken table in narrow terminals
- Readable everywhere — semantic theme palette keeps table rows, overlays, badges, and help screens legible in dark and light terminals
- Global theme switch — `G` cycles `auto`, `dark`, and `light` live without restarting
- Auto-retry — timed-out models keep getting retried
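The parallel-ping idea from the list above can be sketched with native `fetch` and `Promise.allSettled` (Node 18+). The HEAD method, endpoint URLs, and 10-second timeout here are illustrative choices for the sketch, not FCM's actual implementation:

```javascript
// Probe every endpoint concurrently; one slow or failing provider
// never blocks the others. Method and timeout are illustrative.
async function pingAll(endpoints, timeoutMs = 10_000) {
  const probes = endpoints.map(async (url) => {
    const start = performance.now();
    const res = await fetch(url, {
      method: 'HEAD',
      signal: AbortSignal.timeout(timeoutMs), // auto-abort slow probes
    });
    return { url, ok: res.ok, latencyMs: performance.now() - start };
  });
  const settled = await Promise.allSettled(probes);
  return settled.map((r, i) =>
    r.status === 'fulfilled'
      ? r.value
      : { url: endpoints[i], ok: false, latencyMs: null } // timeout or network error
  );
}
```

`Promise.allSettled` (rather than `Promise.all`) is what makes the burst safe: a single rejected probe is recorded as a failure instead of tearing down the whole round.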
We welcome contributions — issues, PRs, new provider integrations.
Q: How accurate are the latency numbers?
A: Real round-trip times measured by your machine. Results depend on your network and provider load at that moment.
Q: Can I add a new provider?
A: Yes — see `sources.js` for the model catalog format.
→ Development guide · Config reference · Tool integrations
MIT © vava
Contributors
vava-nessa ·
erwinh22 ·
whit3rabbit ·
skylaweber ·
PhucTruong-ctrl
Anonymous usage data is collected to improve the tool. No personal information, ever.
