Conversation

@smakosh (Member) commented Jan 22, 2026

Summary

  • Apply the Seven Sweeps copy-editing framework to improve clarity, specificity, and conversion
  • Update landing page components (hero, features, CTA, FAQ, pricing)
  • Improve 5 blog posts with better headlines and benefit-focused copy
  • Enhance 3 migration guides (LiteLLM, OpenRouter, Vercel AI Gateway)
  • Update 2 integration guides (Claude Code, OpenCode)

Key Changes

Specificity: Added concrete numbers (180+ models, 60+ providers, 50% lower fees)

So What Bridges: Connected features to outcomes ("no more scattered credentials", "see exactly what each request costs")

Pain Points: Added emotional language about problems being solved ("If you've built with multiple LLM providers, you know the pain")

Proof: Referenced specific comparisons, SLAs, and statistics where available

Clarity: Fixed license inconsistency (MIT → AGPLv3 in FAQ), improved comparison tables

Test plan

  • Preview landing page copy changes
  • Verify blog post formatting renders correctly
  • Check migration guide tables display properly
  • Verify integration guide code blocks are intact

🤖 Generated with Claude Code

Summary by CodeRabbit

  • Documentation
    • Site copy refreshed across landing pages: hero, features, graph, CTAs, FAQ, and pricing for clearer messaging.
    • Multiple blog posts and guides rewritten to emphasize multi-provider support (180+ models, 60+ providers), cost/latency visibility, and streamlined setup.
    • Migration and self-hosting guides reorganized with updated quick-start steps, benefits, and practical examples.
    • Playground and tutorial content updated for improved onboarding and model-testing guidance.


Apply Seven Sweeps copy editing framework to improve
clarity, specificity, and conversion across:

- Landing page (hero, features, CTA, FAQ, pricing)
- Blog posts (5 posts updated)
- Migration guides (LiteLLM, OpenRouter, Vercel AI)
- Integration guides (Claude Code, OpenCode)

Key changes:
- Add specific numbers (180+ models, 60+ providers)
- Connect features to outcomes ("so what" bridges)
- Fix license inconsistency (MIT -> AGPLv3)
- Improve comparison tables with clearer benefits
- Add pain point language and emotional resonance

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@smakosh smakosh self-assigned this Jan 22, 2026
@smakosh smakosh requested a review from steebchen January 22, 2026 19:00
@coderabbitai
Copy link
Contributor

coderabbitai bot commented Jan 22, 2026

Walkthrough

Updated user-facing copy across landing components, pricing, blog posts, guides, and migration docs to emphasize 180+ models/60+ providers, unified API usage, and per-request cost/latency/token visibility. No public API signatures or runtime control flow were changed.
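For context, the "change one line" pattern this copy keeps emphasizing looks roughly like the following (a minimal sketch, not taken from this diff; the gateway base URL is an assumption, so check the docs for the canonical value):

```ts
// Minimal sketch of the one-line migration the new copy describes.
// Assumptions: the baseURL below and the provider-prefixed model id
// are illustrative, not quoted from this PR.
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.LLM_GATEWAY_API_KEY, // gateway key instead of a provider key
  baseURL: "https://api.llmgateway.io/v1", // the single changed line
});

const completion = await client.chat.completions.create({
  model: "openai/gpt-4o", // provider-prefixed, per the migration guides
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(completion.choices[0].message.content);
```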

Changes

| Cohort / File(s) | Summary |
| ---------------- | ------- |
| **Landing page components**: `apps/ui/src/components/landing/hero.tsx`, `apps/ui/src/components/landing/code-example.tsx`, `apps/ui/src/components/landing/cta.tsx`, `apps/ui/src/components/landing/faq.tsx`, `apps/ui/src/components/landing/features.tsx`, `apps/ui/src/components/landing/graph.tsx`, `apps/ui/src/components/landing/hero-rsc.tsx` | Copy updates: hero headline/subtext, code-example wording about changing base URL/SDK and supported languages, CTA token claim, FAQ model counts and Link usage, features reworded for multi-provider/analytics, graph description updated, minor import reordering in hero-rsc. No behavioral changes. |
| **Pricing**: `apps/ui/src/components/pricing/pricing-hero.tsx` | Updated header and descriptive paragraph to new marketing copy (free tier, Pro fee claim). |
| **Blog posts (announcements & playground)**: `apps/ui/src/content/blog/2025-04-12-introducing-llm-gateway.md`, `apps/ui/src/content/blog/2025-10-14-introducing-llmgateway-playground.md` | Rewrites to emphasize multi-provider breadth, unified dashboard, cost/latency/token tracking, model comparison, and updated CTAs/metadata. |
| **Blog posts (self-hosting & custom providers)**: `apps/ui/src/content/blog/2025-05-01-self-host-llm-gateway.md`, `apps/ui/src/content/blog/2025-05-10-custom-providers.md` | Expanded self-hosting marketing, added setup/access tables, step-by-step custom provider guide, and production-oriented links/tips. |
| **Guides (frameworks/integrations)**: `apps/ui/src/content/guides/claude-code.md`, `apps/ui/src/content/guides/opencode.md` | Updated Quick Start, env var examples, and messaging to highlight multi-model routing with preserved SDK compatibility and cost visibility. |
| **Migration guides**: `apps/ui/src/content/migrations/litellm.md`, `apps/ui/src/content/migrations/openrouter.md`, `apps/ui/src/content/migrations/vercel-ai-gateway.md` | Rewritten migration narratives with concrete steps, updated code snippets (baseURL/env vars), refreshed comparison tables, and self-hosting (AGPLv3) references. |
| **Misc (content metadata/front matter)**: `apps/ui/src/content/blog/*.md`, `apps/ui/src/content/*` | Numerous front-matter and body copy updates across multiple markdown files (titles, summaries, image alt text, metadata fields). |

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

  • #1482: Overlaps on landing copy and migration guide updates; likely touches the same marketing and guide files.
  • #326: Modifies apps/ui/src/components/landing/code-example.tsx implementation — directly related to the code-example copy change.
  • #335: Also modifies apps/ui/src/components/landing/code-example.tsx and rendering — potentially overlapping changes.
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| ------------------ | ---------- | ----------- | ---------- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| ----------------- | --------- | ----------- |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title "docs: improve marketing copy across landing and docs" accurately summarizes the main change: a comprehensive update of marketing and documentation copy across landing pages, blog posts, and guides. |



@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
apps/ui/src/content/migrations/openrouter.md (1)

140-142: Fix env var name mismatch in Vercel AI SDK example.

The guide sets LLM_GATEWAY_API_KEY earlier, but the Vercel AI SDK snippet reads LLMGATEWAY_API_KEY, which will be undefined for readers following the steps.

🔧 Proposed fix

```diff
-  apiKey: process.env.LLMGATEWAY_API_KEY,
+  apiKey: process.env.LLM_GATEWAY_API_KEY,
```
apps/ui/src/content/migrations/vercel-ai-gateway.md (1)

16-31: Fix import diff and align model naming with detailed examples.

The diff has two clarity issues:

  1. Line 19: The generateText import is marked as an addition (+), but this import should already exist in the "before" state since it's required by the Vercel AI SDK regardless of which provider you use.

  2. Model naming inconsistency: This quick example uses "gpt-5.2" (lines 27-28) without a provider prefix, but the detailed migration examples later (lines 92, 97, 137, 159, 221) consistently use provider-prefixed format like "openai/gpt-4o". Additionally, "gpt-5.2" appears to be a placeholder rather than a real model name.

📝 Proposed fix
 ```diff
 - import { openai } from "@ai-sdk/openai";
 - import { anthropic } from "@ai-sdk/anthropic";
+  import { generateText } from "ai";
 + import { createLLMGateway } from "@llmgateway/ai-sdk-provider";
 
 + const llmgateway = createLLMGateway({
 +   apiKey: process.env.LLM_GATEWAY_API_KEY
 + });
 
 const { text } = await generateText({
--   model: openai("gpt-5.2"),
-+   model: llmgateway("gpt-5.2"),
+-   model: openai("gpt-4o"),
++   model: llmgateway("openai/gpt-4o"),
   prompt: "Hello!"
 });
```
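Since the diff above is a diff of a diff, the resolved "after migration" snippet reads more clearly on its own (a sketch assembled from the suggestion; the package and model names come from this review, not verified against the published provider):

```ts
// Resolved end state of the quick example after applying the suggestion.
import { generateText } from "ai";
import { createLLMGateway } from "@llmgateway/ai-sdk-provider";

const llmgateway = createLLMGateway({
  apiKey: process.env.LLM_GATEWAY_API_KEY,
});

const { text } = await generateText({
  model: llmgateway("openai/gpt-4o"), // was: openai("gpt-4o") via @ai-sdk/openai
  prompt: "Hello!",
});

console.log(text);
```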
apps/ui/src/content/migrations/litellm.md (1)

`273-286`: **Fix repository URL and setup commands — these will fail for users.**

The self-hosting instructions contain critical errors:

1. **Line 278**: Repository URL is incorrect. The repository `https://github.com/llmgateway/llmgateway` returns a 404. The correct URL is `https://github.com/theopenco/llmgateway`.

2. **Line 280**: The command should be `pnpm run setup`, not `pnpm setup`. The latter does not exist in the project.

Update the snippet:
```bash
git clone https://github.com/theopenco/llmgateway.git
cd llmgateway
pnpm i
pnpm run setup
pnpm dev
```
apps/ui/src/content/guides/claude-code.md (1)

23-31: Align auth token naming between Quick Start and curl example.

Quick Start uses ANTHROPIC_AUTH_TOKEN, but the curl example uses LLM_GATEWAY_API_KEY. This inconsistency can lead to copy/paste failures. Pick one variable name and use it everywhere.

🔧 Suggested fix (pick one variable name and align both sections)

```diff
-  -H "Authorization: Bearer $LLM_GATEWAY_API_KEY" \
+  -H "Authorization: Bearer $ANTHROPIC_AUTH_TOKEN" \
```

Also applies to: 93-103
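Aligned on the Quick Start's variable, both sections would then share one name, along these lines (a sketch; the gateway URL is an assumption rather than a quote from the guide):

```bash
# One variable everywhere: Claude Code reads ANTHROPIC_AUTH_TOKEN,
# and the smoke-test curl reuses it instead of LLM_GATEWAY_API_KEY.
export ANTHROPIC_AUTH_TOKEN="your-llmgateway-api-key"

curl https://api.llmgateway.io/v1/messages \
  -H "Authorization: Bearer $ANTHROPIC_AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model": "openai/gpt-4o", "max_tokens": 64, "messages": [{"role": "user", "content": "ping"}]}'
```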

🤖 Fix all issues with AI agents
In `@apps/ui/src/components/landing/code-example.tsx`:
- Around line 236-241: The paragraph text "Already using OpenAI's SDK? Change
one line—your base URL—and you're done. Works with Python, TypeScript, Go, and 6
more languages." in the JSX should be updated to avoid the incorrect numeric
language count; edit the string in the <p> element (the paragraph in
code-example.tsx) to either remove "and 6 more languages" or replace it with a
non-quantified phrase such as "and other languages" or explicitly list the
supported examples to match the actual examples shown.

In `@apps/ui/src/components/landing/faq.tsx`:
- Around line 87-93: In apps/ui/src/components/landing/faq.tsx replace the raw
internal anchor (<a href="/models" className="underline">...</a>) with Next.js
client-side navigation using the Link component: import Link from 'next/link' if
not already present, wrap the link text with <Link href="/models"
className="underline">models page</Link> (or use <Link href="/models"><a
className="underline">...</a></Link> depending on your Next.js version),
preserving the existing className and children; update any surrounding JSX in
the FAQ component accordingly to use Link for internal navigation.
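A minimal sketch of that replacement (assuming a recent Next.js where Link renders its own anchor; the surrounding sentence is illustrative):

```tsx
import Link from "next/link";

// FAQ answer fragment after the swap: internal navigation goes through
// next/link instead of a raw <a>, keeping the existing className.
export function ModelsFaqAnswer() {
  return (
    <p>
      See the full list on the{" "}
      <Link href="/models" className="underline">
        models page
      </Link>
      .
    </p>
  );
}
```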

In `@apps/ui/src/content/blog/2025-05-01-self-host-llm-gateway.md`:
- Around line 45-56: The services table under the "Access Your Instance" section
is missing an entry for port 3003 which is exposed in the Docker configuration;
either add a new table row for the service listening on http://localhost:3003
with a concise Description (e.g., "Realtime websocket/metrics" or the actual
component name used in your Docker compose), or add a short note directly below
the table explaining why port 3003 is exposed but not user-facing (reference the
Docker-exposed port 3003 and the "Access Your Instance" table rows such as
Gateway/http://localhost:4001 to keep format consistent).

In
`@apps/ui/src/content/blog/2025-09-08-how-configure-claude-code-with-llmgateway.md`:
- Line 19: Update the phrasing in the line "**Use any model** — GPT-5, Gemini,
Llama, or 180+ others with tool calling support" to hyphenate the compound
modifier by changing "tool calling support" to "tool-calling support" so the
compound adjective correctly modifies "support".

In `@apps/ui/src/content/blog/2025-10-14-introducing-llmgateway-playground.md`:
- Line 33: Replace the deprecated model string "gemini-2.5-flash-image-preview"
with the stable model name "gemini-2.5-flash-image" in the blog copy (the
sentence that begins "Select an image model..."); search for any other
occurrences of "gemini-2.5-flash-image-preview" in the document and update them
to "gemini-2.5-flash-image" to ensure all references use the correct, current
model identifier.

In `@apps/ui/src/content/guides/claude-code.md`:
- Line 11: Change the phrase "Full cost tracking" to the hyphenated compound
adjective "Full-cost tracking" where it directly modifies "tracking" in the copy
(replace the existing "Full cost tracking in your dashboard." sentence with
"Full-cost tracking in your dashboard."); ensure the rest of the sentence and
punctuation remain unchanged.

In `@apps/ui/src/content/migrations/litellm.md`:
- Around line 265-272: The bullet under the "## What Changes After Migration"
section is contradictory: the bold text "**New models immediately**" conflicts
with the following "within 48 hours"; update the bullet (the line containing "-
**New models immediately** — Access new releases within 48 hours, no deployment
needed") to use consistent timing language — either change the bold phrase to
"**New models within 48 hours**" or replace "within 48 hours" with "immediately"
(and remove any qualifying delay), and ensure the explanatory text matches the
chosen claim.

In `@apps/ui/src/content/migrations/openrouter.md`:
- Around line 25-33: Update all LLM Gateway Pro fee mentions from 2.5% to 1%: in
the table replace "**2.5%** (50% lower)" with "**1%** (80% lower)" and change any
inline comparisons "2.5% vs 5%" to "1% vs 5%" and "2.5% vs OpenRouter's 5%" to
"1% vs OpenRouter's 5%"; verify these textual edits in this markdown and ensure
they match the canonical constant PRO_PLAN_FEE_PERCENTAGE
(packages/shared/src/fees.ts = 0.01). Also address the OpenRouter model-count
claim by either removing or clarifying "300+ models" (e.g., mark as
gateway-reported or remove the specific count) to avoid an unsupported absolute
model-hosting assertion.
🧹 Nitpick comments (1)
apps/ui/src/content/blog/2025-05-01-self-host-llm-gateway.md (1)

57-65: LGTM! Clear benefit articulation.

The expanded benefits section effectively communicates the value of self-hosting with scannable bullet points. Each benefit is concrete and compelling.

Optional: Line 64 references "the hosted version" without prior context. If readers aren't aware of a hosted option, consider briefly noting "LLM Gateway's cloud offering" for clarity—though this is a minor point.

Comment on lines +45 to +56
### Access Your Instance

Once running, your services are available at:

| Service | URL | Description |
| ------- | --------------------- | ------------------------------ |
| Web UI | http://localhost:3002 | Dashboard and analytics |
| Docs | http://localhost:3005 | Local documentation |
| Admin | http://localhost:3006 | Platform administration |
| API | http://localhost:4002 | Management API |
| Gateway | http://localhost:4001 | LLM request gateway (use this) |


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

```bash
# Search for port 3003 references in the documentation
rg "3003" --type md -C 3
```

Repository: theopenco/llmgateway

Length of output: 184


🏁 Script executed:

```bash
# Also check the specific file and the Docker command around line 23
cat -n apps/ui/src/content/blog/2025-05-01-self-host-llm-gateway.md | head -60
```

Repository: theopenco/llmgateway

Length of output: 2443


Document or remove port 3003: Exposed in Docker but missing from services table.

Line 23 exposes port 3003, but it's not listed in the services table (lines 49-55). Either add this service to the table with its description, or clarify in the documentation why it's exposed but not user-facing.

🤖 Prompt for AI Agents
In `@apps/ui/src/content/blog/2025-05-01-self-host-llm-gateway.md` around lines 45
- 56, The services table under the "Access Your Instance" section is missing an
entry for port 3003 which is exposed in the Docker configuration; either add a
new table row for the service listening on http://localhost:3003 with a concise
Description (e.g., "Realtime websocket/metrics" or the actual component name
used in your Docker compose), or add a short note directly below the table
explaining why port 3003 is exposed but not user-facing (reference the
Docker-exposed port 3003 and the "Access Your Instance" table rows such as
Gateway/http://localhost:4001 to keep format consistent).

```diff
-- **Model Diversity**: Access models from OpenAI, Google, Cohere, and more
-- **Cost Optimization**: Choose the most cost-effective model for your tasks
-- **Unified Interface**: Single configuration for all providers
+- **Use any model** — GPT-5, Gemini, Llama, or 180+ others with tool calling support
```

⚠️ Potential issue | 🟡 Minor

Hyphenate compound modifier.

When "tool calling" modifies "support," it should be hyphenated as "tool-calling support" per standard grammar rules for compound adjectives.

✏️ Proposed fix

```diff
-- **Use any model** — GPT-5, Gemini, Llama, or 180+ others with tool calling support
+- **Use any model** — GPT-5, Gemini, Llama, or 180+ others with tool-calling support
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```diff
-- **Use any model** — GPT-5, Gemini, Llama, or 180+ others with tool calling support
+- **Use any model** — GPT-5, Gemini, Llama, or 180+ others with tool-calling support
```
🧰 Tools
🪛 LanguageTool

[grammar] ~19-~19: Use a hyphen to join words.
Context: ... Gemini, Llama, or 180+ others with tool calling support - Cut costs — Use GP...

(QB_NEW_EN_HYPHEN)

🤖 Prompt for AI Agents
In
`@apps/ui/src/content/blog/2025-09-08-how-configure-claude-code-with-llmgateway.md`
at line 19, Update the phrasing in the line "**Use any model** — GPT-5, Gemini,
Llama, or 180+ others with tool calling support" to hyphenate the compound
modifier by changing "tool calling support" to "tool-calling support" so the
compound adjective correctly modifies "support".

### Image Generation

```diff
-Generate stunning images directly in the chat interface. Simply select an image generation model like `gemini-2.5-flash-image-preview` or any other compatible model, describe what you want to create, and watch the AI bring your ideas to life. All generated images are displayed inline with zoom support for detailed viewing.
+Select an image model (like `gemini-2.5-flash-image-preview`), describe what you want, and see results inline. Great for prototyping visual content before integrating image generation into your app.
```

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

Gemini image generation models available January 2026 Google API

💡 Result:

As of January 22, 2026 the Gemini image-generation models accessible via Google’s APIs (Vertex AI / Gemini API) are:

  • gemini-2.5-flash-image — stable, supported for image generation/editing (the main Gemini 2.5 image model). [1][2]
  • gemini-3-pro-image-preview — Gemini 3 Pro image-generation preview (launched Nov 2025; preview status). [1][3]
  • Nano Banana / Nano Banana Pro — Google’s product names for Gemini-native image generators (Nano Banana maps to Gemini 2.5 image family; Nano Banana Pro is integrated with Gemini 3 Pro). Available through the Gemini API / Vertex UI depending on subscription/preview access. [2][3]

Notes: some older image-preview model IDs (e.g., gemini-2.0-flash-preview-image-generation / earlier preview variants) have been deprecated or scheduled for retirement; migrate to the 2.5/3-series image models where applicable. [3]

Sources: Google Vertex AI image-generation docs and Gemini API release notes. [1][2][3]

References
[1] Google Cloud — Generate and edit images with Gemini (Vertex AI).
[2] Google AI / Gemini API — Meet the models (Gemini image generation / Nano Banana).
[3] Gemini API release notes (Nov–Dec 2025).


Update Gemini model name to gemini-2.5-flash-image.

The model referenced as gemini-2.5-flash-image-preview does not exist. As of January 2026, the correct stable model for Gemini image generation is gemini-2.5-flash-image. Older preview variants have been deprecated; use the 2.5 or 3 series stable/preview models instead.

🤖 Prompt for AI Agents
In `@apps/ui/src/content/blog/2025-10-14-introducing-llmgateway-playground.md` at
line 33, Replace the deprecated model string "gemini-2.5-flash-image-preview"
with the stable model name "gemini-2.5-flash-image" in the blog copy (the
sentence that begins "Select an image model..."); search for any other
occurrences of "gemini-2.5-flash-image-preview" in the document and update them
to "gemini-2.5-flash-image" to ensure all references use the correct, current
model identifier.

```diff
-LLM Gateway provides a native Anthropic-compatible endpoint at `/v1/messages` that allows you to use any model in our catalog while maintaining the familiar Anthropic API format. This is especially useful for Claude Code users who want to access models beyond Claude.
+Claude Code is locked to Anthropic's API by default. With LLM Gateway, you can point it at any model—GPT-5, Gemini, Llama, or 180+ others—while keeping the same Anthropic API format Claude Code expects.
+
+Three environment variables. No code changes. Full cost tracking in your dashboard.
```

⚠️ Potential issue | 🟡 Minor

Minor copy tweak: hyphenate compound adjective.

Consider “Full-cost tracking” if it modifies “tracking” as a compound adjective.

🧰 Tools
🪛 LanguageTool

[uncategorized] ~11-~11: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ...environment variables. No code changes. Full cost tracking in your dashboard. ## Video T...

(EN_COMPOUND_ADJECTIVE_INTERNAL)

🤖 Prompt for AI Agents
In `@apps/ui/src/content/guides/claude-code.md` at line 11, Change the phrase
"Full cost tracking" to the hyphenated compound adjective "Full-cost tracking"
where it directly modifies "tracking" in the copy (replace the existing "Full
cost tracking in your dashboard." sentence with "Full-cost tracking in your
dashboard."); ensure the rest of the sentence and punctuation remain unchanged.

Comment on lines +265 to 272

```diff
 ## What Changes After Migration

-- **No Infrastructure Management**: No proxy servers to maintain or scale
-- **Built-in Analytics**: View costs, latency, and usage in the dashboard
-- **Response Caching**: Automatic caching reduces costs
-- **Web Dashboard**: Manage API keys and view analytics without CLI
-- **Automatic Updates**: New models available immediately
+- **No servers to babysit** — We handle scaling, uptime, and updates
+- **Real-time cost visibility** — See what every request costs, broken down by model
+- **Automatic caching** — Repeated requests hit cache, reducing your spend
+- **Web-based management** — No more editing YAML files for config changes
+- **New models immediately** — Access new releases within 48 hours, no deployment needed
```


⚠️ Potential issue | 🟡 Minor

Clarify the contradictory timeframe claim.

The benefits list effectively communicates value, but Line 271 contains a contradiction:

  • "New models immediately" suggests instant availability
  • "within 48 hours" directly contradicts "immediately"

These claims are mutually exclusive. Consider using consistent language:

📝 Suggested clarification

```diff
-- **New models immediately** — Access new releases within 48 hours, no deployment needed
+- **Fast model updates** — Access new releases within 48 hours, no deployment needed
```

Or, if truly immediate:

```diff
-- **New models immediately** — Access new releases within 48 hours, no deployment needed
+- **New models immediately** — Access new releases as soon as they launch, no deployment needed
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```diff
-- **New models immediately** — Access new releases within 48 hours, no deployment needed
+- **Fast model updates** — Access new releases within 48 hours, no deployment needed
```

Suggested change

```diff
-- **New models immediately** — Access new releases within 48 hours, no deployment needed
+- **New models immediately** — Access new releases as soon as they launch, no deployment needed
```
🤖 Prompt for AI Agents
In `@apps/ui/src/content/migrations/litellm.md` around lines 265 - 272, The bullet
under the "## What Changes After Migration" section is contradictory: the bold
text "**New models immediately**" conflicts with the following "within 48
hours"; update the bullet (the line containing "- **New models immediately** —
Access new releases within 48 hours, no deployment needed") to use consistent
timing language — either change the bold phrase to "**New models within 48
hours**" or replace "within 48 hours" with "immediately" (and remove any
qualifying delay), and ensure the explanatory text matches the chosen claim.

Comment on lines +25 to +33
| Feature | OpenRouter | LLM Gateway |
| ------------------------ | ---------------------------- | ------------------------- |
| Gateway fee (Pro) | 5% | **2.5%** (50% lower) |
| OpenAI-compatible API | Yes | Yes |
| Model coverage | 300+ models | 180+ models |
| Analytics dashboard | Via third-party integrations | **Built-in, per-request** |
| Required headers | HTTP-Referer, X-Title | **Just Authorization** |
| Self-hosting option | No | **Yes (AGPLv3)** |
| Anthropic-compatible API | No | **Yes (/v1/messages)** |

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

```bash
cat -n apps/ui/src/content/migrations/openrouter.md | head -200
```

Repository: theopenco/llmgateway

Length of output: 7896


🌐 Web query:

OpenRouter Pro pricing gateway fee percentage 2025 2026

💡 Result:

As of Jan 22, 2026: OpenRouter charges a 5.5% fee (minimum $0.80) on credit purchases, and a 5% gateway/BYOK usage fee for BYOK usage after the first 1M free requests per month. [1][2]

Sources:
[1] OpenRouter FAQ — Pricing and Fees.
[2] FitGap OpenRouter pricing summary.


🌐 Web query:

OpenRouter API pricing models available count

💡 Result:

OpenRouter does not offer a fixed set of subscription "pricing models" to count — it uses a credit-based, pay-as-you-go system that passes through per-model (per‑token or per‑request) pricing for each underlying model/provider. [1][2]

Sources:

  • OpenRouter docs (FAQ: billing, credits, per-model pricing). [1]
  • OpenRouter models/pricing overview (models page describing per-model pricing). [2]

🌐 Web query:

LLM Gateway pricing model count 2025 2026

💡 Result:

  • Nov 2025 snapshot: 139 models. [1]
  • Jan 2026 snapshot: 188 models (LLM Gateway marketing/pricing also lists "180+ unique models" as of mid-Jan 2026). [2][3]

Sources: LLM Gateway models listing (Nov 2025) [1]; LLM Gateway models listing (Jan 2026) [2]; LLM Gateway pricing page (Jan 2026) [3].


🏁 Script executed:

```bash
# Search for LLM Gateway pricing or product info in the repo
rg -i "pricing|fee|model.*count|gateway.*fee" --type md --type ts --type js --type json -A 2 -B 2 | head -100
```

Repository: theopenco/llmgateway

Length of output: 5586


🏁 Script executed:

```bash
# Search for compare page and pricing documentation
find . -type f -name "*compare*" -o -name "*pricing*" | grep -E "\.(md|ts|json)$"
```

Repository: theopenco/llmgateway

Length of output: 46


🏁 Script executed:

```bash
# Check if there's a models endpoint or pricing info in the codebase
rg -i "180|188|model.*count" --type ts --type json --type md -A 1 -B 1 | head -50
```

Repository: theopenco/llmgateway

Length of output: 3162


🌐 Web query:

LLM Gateway Pro plan gateway fee actual percentage current pricing

💡 Result:

As of January 22, 2026, the LLM Gateway Pro plan charges a 1% gateway/platform fee on credit usage (Pro pricing page and Dec 19, 2025 update). [1][2]

Sources:

  • LLM Gateway Pricing page (shows Pro: 1% on credit usage). [1]
  • LLM Gateway blog (notes Pro fees changed to 1% effective Dec 19, 2025). [2]

Update LLM Gateway Pro fee from 2.5% to 1% across all mentions.

Web search confirms LLM Gateway's Pro plan charges 1% as of December 19, 2025 (verified January 22, 2026), and the codebase (packages/shared/src/fees.ts) shows PRO_PLAN_FEE_PERCENTAGE = 0.01. This means:

  • Line 27: Change "2.5% (50% lower)" to "1% (80% lower)"
  • Line 10: Change "2.5% vs 5%" to "1% vs 5%"
  • Line 181: Change "2.5% vs OpenRouter's 5%" to "1% vs OpenRouter's 5%"

The "180+ models" claim for LLM Gateway is accurate (188 as of January 2026). However, OpenRouter's "300+ models" claim is not verifiable since OpenRouter is a gateway service rather than a model host—consider clarifying or removing this specific count.

🤖 Prompt for AI Agents
In `@apps/ui/src/content/migrations/openrouter.md` around lines 25 - 33, Update
all LLM Gateway Pro fee mentions from 2.5% to 1%: in the table replace "**2.5%**
(50% lower)" with "**1%** (80% lower) and change any inline comparisons "2.5% vs
5%" to "1% vs 5%" and "2.5% vs OpenRouter's 5%" to "1% vs OpenRouter's 5%";
verify these textual edits in this markdown and ensure they match the canonical
constant PRO_PLAN_FEE_PERCENTAGE (packages/shared/src/fees.ts = 0.01). Also
address the OpenRouter model-count claim by either removing or clarifying "300+
models" (e.g., mark as gateway-reported or remove the specific count) to avoid
an unsupported absolute model-hosting assertion.
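One way to keep the fee figure from drifting again is a small consistency check against the canonical constant (a hypothetical sketch, not part of this PR; the path and constant name are taken from the review above):

```ts
// Hypothetical CI guard: fail if openrouter.md stops mentioning the
// current Pro fee derived from PRO_PLAN_FEE_PERCENTAGE (0.01 -> "1%").
import { readFileSync } from "node:fs";

const fees = readFileSync("packages/shared/src/fees.ts", "utf8");
const match = fees.match(/PRO_PLAN_FEE_PERCENTAGE\s*=\s*([\d.]+)/);
if (!match) throw new Error("PRO_PLAN_FEE_PERCENTAGE not found in fees.ts");

const expected = `${Number(match[1]) * 100}%`; // e.g. "1%"
const doc = readFileSync("apps/ui/src/content/migrations/openrouter.md", "utf8");

if (!doc.includes(expected)) {
  throw new Error(`openrouter.md no longer mentions the current Pro fee (${expected})`);
}
```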

Comment on lines 240 to 241
Already using OpenAI's SDK? Change one line—your base URL—and you're
done. Works with Python, TypeScript, Go, and 6 more languages.
A Member left a comment:

Suggested change

```diff
-Already using OpenAI's SDK? Change one line—your base URL—and you're
-done. Works with Python, TypeScript, Go, and 6 more languages.
+Already using OpenAI's SDK? Change one line—your base URL—and you're
+done. Works with any language or framework.
```

- Replace raw anchor tag with Next.js Link component in FAQ
- Update marketing copy to simplify language compatibility message

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@steebchen steebchen enabled auto-merge January 22, 2026 20:55
@steebchen steebchen added this pull request to the merge queue Jan 22, 2026
Merged via the queue into main with commit 94610eb Jan 22, 2026
7 checks passed
@steebchen steebchen deleted the copy/improve-marketing-copy branch January 22, 2026 21:06