A premium, high-speed Artificial Intelligence console designed with a unique, responsive "Sketch" user interface. Engineered for maximum productivity, real-time streaming interactivity, and enterprise-level multimodal AI workflows.
The application is fully deployed on Vercel with live AI streaming powered by Groq + LLaMA 3.3 (70B). No setup required — just open the link and start chatting!
- 💬 Real-time AI Streaming: Lightning-fast, token-by-token streaming powered by LLaMA 3.3 (70B) via the highly performant Groq API.
- 🎙️ Voice Intelligence (STT & TTS): Speak your prompts using the browser's native `SpeechRecognition` API, and have answers read back aloud.
- 📂 Multi-Modal Document Context (OCR): Upload multiple PDFs, text files, and images concurrently to analyze dense source material in context.
- 📝 Intelligent Prompt Branching (Message Editing): Edit any previous prompt on the fly; later messages are discarded and the conversation forks into a new timeline.
- 🎨 In-Chat Image Generation: The `/image [prompt]` command is intercepted to render inline illustrations via Pollinations AI.
- 💻 Code Snippet Engine: Advanced syntax highlighting paired with a one-click local download for code blocks (`.js`, `.json`, `.py`, etc.).
- 🌗 Dynamic Theme Switching: An integrated "Moon/Sun" toggle inverts the custom sharp-edged black/white Sketch design layout.
- 🛑 Granular Output Control: Abort an active AI stream at any time, or hit the one-click Regenerate button to request an alternative response.
- 💾 Persistent Edge Memory: Full chat history is kept in the sidebar and persisted across sessions via `localStorage`.
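The prompt-branching behavior above can be sketched as a pure function. This is a minimal illustration, assuming messages live in a flat, time-ordered array (the repo's actual state shape may differ); `forkAtEdit` and the message objects are hypothetical names.

```javascript
// Sketch of prompt branching: editing message `index` replaces its
// content and discards every later node, forking a fresh timeline.
function forkAtEdit(messages, index, newContent) {
  if (index < 0 || index >= messages.length) {
    throw new RangeError("index out of bounds");
  }
  // Keep history up to the edited message, swap in the new content,
  // and drop all "future" nodes so the model re-answers from here.
  return [
    ...messages.slice(0, index),
    { ...messages[index], content: newContent },
  ];
}
```

The function returns a new array rather than mutating state, which fits React's immutable-update model.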
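The `/image` interception could look like the following sketch, assuming the raw input string is inspected before it is sent to the chat endpoint (`parseImageCommand` and the returned shape are illustrative, not the repo's actual API):

```javascript
// Sketch of /image command interception: route "/image <prompt>"
// to the image generator, everything else to the chat stream.
function parseImageCommand(input) {
  const match = input.trim().match(/^\/image\s+(.+)$/s);
  return match
    ? { type: "image", prompt: match[1].trim() } // render via Pollinations AI
    : { type: "chat", prompt: input };           // normal LLM round-trip
}
```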
| Node | Core Technology |
|---|---|
| Frontend | Next.js 14, React (Tailwind v3) |
| Backend | Express.js, Node.js |
| AI Layer | Groq Cloud (LLaMA Series) |
| File Parser | PDF-Parse, FileReader API |
| Styling | Lucide-React, Architect's Daughter Font |
```
Full-Stack_GenAI-Assistant/
├── Frontend/            # Next.js 14 Interface Client
│   ├── app/             # App Router Core
│   ├── components/      # React Components & Toolboxes
│   └── globals.css      # Sketch UI Styles
└── Backend/             # Express Stateless Server
    └── server.js        # Dynamic Model Routing & API Gateway
```
- Node.js v18+
- Groq API Key → console.groq.com
- Tavily API Key (optional, for search)
```bash
cd Frontend
npm install
# Create .env.local and add:
# GROK_API_KEY="your_groq_api_key_here"
npm run dev
```

The platform deploys at http://localhost:3000.
```bash
cd Backend
npm install
# Configure your .env
# GROK_API_KEY="..."
npm start
```

The server boots on http://localhost:5000.
This project is deployed on Vercel using the Next.js serverless architecture.
| Environment | URL |
|---|---|
| Production | https://full-stack-gen-ai-assistant.vercel.app |
| GitHub Repo | https://github.com/Chiranjeeb-Dash-Git/Full-Stack_GenAI-Assistant |
To deploy your own fork:
- Fork this repository
- Import it into Vercel
- Set Root Directory to `Frontend`
- Set Framework Preset to `Next.js`
- Add `GROK_API_KEY` as an Environment Variable
- Click Deploy ✅
Developer: Chiranjeeb Dash
Launch: April 2026
Live Demo: https://full-stack-gen-ai-assistant.vercel.app