A modern chat application built with Next.js 16, Supabase, and AI SDK, centered around the "Robot in the Room" paradigm.
e4chat is designed for those who are tired of switching between different AI windows, chats, or models. Instead of separate silos, e4chat brings multiple AI models and multiple humans into the same room at the same time.
- No Switching: Stop jumping between GPT-4, Gemini, and your team chat. Have them all in one place.
- Dynamic Interaction: Invite models to a room, kick them out, or @mention them for specific answers.
- Robot in the Room: AI isn't just a separate tab; it's a participant that can jump in, debate with other models, and assist humans in real-time.
- Multi-Model Support: Invite multiple AI models (GPT-4o, GPT-4o Mini, Gemini, etc.) to the same chat room.
- Multi-User Real-time Chat: Bring people into the room with live indicators for typing, online status, and active participation.
- Dynamic Room Management: Owners can dynamically add or remove models and human participants while the chat is running.
- @Mentions: Direct traffic by mentioning specific models (e.g., `@GPT-4o`) for targeted responses.
- Secure Room Sharing: Kahoot-style invite codes, live status, and password protection for private rooms.
- Markdown Rendering: Full support for rich text and code snippets in chat messages.
AI models don't just wait to be called. They can decide to speak based on the conversation flow:
- The Trigger: Every time a human sends a message, there is a chance an AI will jump in.
- The Calculation: The probability increases as the conversation "gap" grows. If humans haven't heard from an AI in a while, the chance of an AI intervening scales up to 80%.
- Contextual Awareness: AIs read the recent transcript to decide if they have something unique to contribute.
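The trigger mechanics above can be sketched as a small probability function. This is an illustrative model, not the project's actual code: only the 80% cap comes from the text, while `BASE_CHANCE` and `RAMP_MESSAGES` are assumed values.

```typescript
const BASE_CHANCE = 0.2; // assumed floor right after an AI has spoken
const MAX_CHANCE = 0.8;  // the cap stated above
const RAMP_MESSAGES = 6; // assumed human-message count needed to reach the cap

// Probability that an AI jumps in, growing as the conversation "gap" widens.
function interventionChance(humanMessagesSinceLastAi: number): number {
  const ramp = Math.min(humanMessagesSinceLastAi / RAMP_MESSAGES, 1);
  return BASE_CHANCE + (MAX_CHANCE - BASE_CHANCE) * ramp;
}

// On each human message, roll the dice against the current chance.
const aiShouldSpeak = Math.random() < interventionChance(3);
```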
To prevent AIs from talking over each other or ignoring each other's input:
- Strict Serialization: We implement a "Global Busy Lock". Only one participant (AI) can "grab the mic" at a time.
- Sequential Context: If multiple AIs are triggered, they are queued. Each subsequent AI reads what the previous ones just said before formulating its response, ensuring a coherent debate.
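A minimal sketch of the "grab the mic" lock, assuming a promise-chain mutex (class and method names here are illustrative, not the actual implementation):

```typescript
// Only one AI turn runs at a time; queued turns run in submission order,
// so each one can read whatever the previous turns appended to the transcript.
class MicLock {
  private tail: Promise<void> = Promise.resolve();

  // Queue fn behind every previously queued turn.
  run<T>(fn: () => Promise<T>): Promise<T> {
    const result = this.tail.then(fn);
    // Keep the chain alive even if fn rejects.
    this.tail = result.then(() => {}, () => {});
    return result;
  }
}
```

Queuing two AI turns through the same lock guarantees the second one formulates its response after seeing the first one's message.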
To prevent models from yapping indefinitely or eating up your API budget:
- Refillable Battery: Rooms have an `ai_tokens` pool (e.g., 0/10).
- Charging: Every human message "charges" the battery.
- Exhaustion: AI responses consume tokens. If the battery is dead, the AIs stay silent until a human speaks and refills it.
AIs are instructed to stop if a topic is covered and only mention other AIs if explicitly needed. If they ignore these rules, a hard rate limit (e.g., max 10 consecutive AI turns) forces them to stop.
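The battery and the hard turn limit can be sketched together as a single gate. This is a simplified model under assumed constants (one token charged per human message, caps of 10), not the project's actual schema:

```typescript
class AiTurnGate {
  private tokens = 0;            // the ai_tokens pool, starts empty (0/10)
  private consecutiveAiTurns = 0;

  constructor(
    private readonly cap = 10,       // pool ceiling, e.g. the 0/10 example
    private readonly maxStreak = 10, // hard limit on consecutive AI turns
  ) {}

  onHumanMessage(): void {
    this.tokens = Math.min(this.tokens + 1, this.cap); // "charging"
    this.consecutiveAiTurns = 0;                       // humans reset the streak
  }

  // An AI may speak only if the battery has charge and the streak cap holds.
  tryAiTurn(): boolean {
    if (this.tokens <= 0) return false;                          // battery dead
    if (this.consecutiveAiTurns >= this.maxStreak) return false; // forced stop
    this.tokens -= 1;
    this.consecutiveAiTurns += 1;
    return true;
  }
}
```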
- Node.js: v25.3.0 (locked in `.nvmrc`)
- Bun: Latest version
- Supabase CLI: Installed via Homebrew (`brew install supabase/tap/supabase`)
Clone the repository and install dependencies:
```bash
bun install
```

- Go to Supabase and create a new project.
- Note your Project Ref (Settings > General).
```bash
# Login to Supabase CLI
supabase login

# Link the project
supabase link --project-ref <your-project-ref>

# Push existing migrations to the remote database
# This sets up the schema, RLS policies, and triggers
supabase db push
```

This app uses GitHub and Google OAuth. Email auth is disabled.
In the Supabase Dashboard:
- GitHub: Enable in Authentication > Providers, add your Client ID/Secret.
- Google: Enable in Authentication > Providers, add your Client ID/Secret.
- URL Configuration: Set Site URL to `http://localhost:3000` and add production URLs to Redirect URLs.
Copy `.example.env` to `.env.local` and fill in the values:

```bash
cp .example.env .env.local
```

```bash
# Ensure you are on Node v25.3.0
nvm use
bun dev
```

All names and identifiable information have been blocked out with black bars.

To add or modify AI models:
- Open `lib/models.ts`.
- Add a new entry to the `AVAILABLE_MODELS` array.
- Supported providers are `openai` and `google`. You can create your own provider by implementing the `AIProvider` interface. (Note: I am a student and Anthropic's API is too expensive for me! :))
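A sketch of what a model entry might look like. The field names, `ModelEntry` shape, and `AIProvider` signature below are assumptions for illustration; check `lib/models.ts` for the real definitions.

```typescript
// Assumed shape of the provider interface mentioned above.
interface AIProvider {
  generate(prompt: string): Promise<string>;
}

// Assumed shape of one AVAILABLE_MODELS entry.
interface ModelEntry {
  id: string;                     // unique model identifier
  label: string;                  // display name shown in the room
  provider: "openai" | "google";  // the two supported providers
}

const AVAILABLE_MODELS: ModelEntry[] = [
  { id: "gpt-4o", label: "GPT-4o", provider: "openai" },
  { id: "gemini-1.5-pro", label: "Gemini 1.5 Pro", provider: "google" },
];
```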
- Push your code to GitHub and import to Vercel.
- Add all environment variables from `.env.local` to Vercel Settings.
- Update `NEXT_PUBLIC_APP_URL` to your deployment URL.
- Add the Vercel URL to Supabase's Redirect URLs.
```bash
# Fix linting issues
bun lint:fix

# Format code
bun format
```