Visual RAG Pipeline Builder Powered by LangGraph
Transform complex retrieval-augmented generation workflows into intuitive visual diagrams. Build, test, and deploy production-ready RAG systems without writing a single line of code—unless you want to export it.
Most RAG implementations fall into two extremes: oversimplified templates that lack flexibility, or complex code-first approaches that require deep technical expertise. Flowragen bridges this gap by providing a visual interface that maintains the power and flexibility of LangGraph while remaining accessible to non-engineers.
The result? Faster prototyping, clearer architecture visualization, and production-ready code generation—all from a drag-and-drop interface.
Construct RAG pipelines using an intuitive node-based editor. Each node represents a discrete operation in your pipeline—document loading, text splitting, embedding generation, vector storage, retrieval, and LLM inference.
Execute workflows directly in the browser with live execution traces. Watch data flow through your pipeline, inspect intermediate outputs, and debug bottlenecks in real time.
Export your visual workflow as executable Python code using LangGraph. The generated code is production-ready, fully documented, and can be integrated into existing systems without modification.
- Demo Mode: Test and prototype without API keys using mock responses
- Production Mode: Connect to real LLM providers (Groq, OpenAI, Anthropic) for live execution
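The mode switch can be sketched as a small factory that falls back to a mock client when no API key is configured. This is an illustrative sketch only; names like `MockLLM` and `get_llm_client` are assumptions, not Flowragen's actual API:

```python
import os

class MockLLM:
    """Demo Mode stand-in: returns canned responses, no API key required."""
    def generate(self, prompt: str) -> str:
        return f"[mock response to: {prompt[:40]}]"

def get_llm_client(provider=None):
    """Pick Demo or Production Mode based on whether an API key is present."""
    provider = provider or os.getenv("DEFAULT_LLM_PROVIDER", "groq")
    api_key = os.getenv(f"{provider.upper()}_API_KEY", "")
    if not api_key:
        return MockLLM()  # Demo Mode: no key configured
    # Production Mode: construct the real SDK client (Groq/OpenAI/Anthropic) here.
    raise NotImplementedError(f"wire up the {provider} SDK client")
```

A call like `get_llm_client("groq")` then returns the mock unless `GROQ_API_KEY` is present in the environment.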
```
User Input → Visual Editor → Workflow JSON → Backend Validation
                                                    ↓
                                            Graph Construction
                                                    ↓
                                              Node Execution
                                                    ↓
                                        Execution Trace + Output
                                                    ↓
                                            Frontend Display
```
- React 18.2 - Component architecture with hooks
- Vite 7.2 - Lightning-fast build tooling
- React Router 6.20 - Client-side routing
- Lucide React - Icon system
- Custom Canvas Engine - Node-based visual editor with zoom/pan
- FastAPI 0.104+ - High-performance async API framework
- LangChain 0.1+ - LLM orchestration framework
- LangGraph 0.0.26+ - Stateful multi-actor applications
- Pydantic 2.5+ - Data validation and settings management
- Groq - Primary LLM provider (Llama 3.1 70B)
- OpenAI - Alternative LLM provider (GPT-4)
- Anthropic - Alternative LLM provider (Claude 3)
- Sentence Transformers - Local embedding generation
- FAISS - Vector similarity search
- Chroma - Alternative vector database
- PyPDF 3.17+ - PDF parsing
- python-docx 1.1+ - Word document processing
- tiktoken 0.5+ - Token counting and management
```
flowragen/
├── backend/
│   ├── main.py                  # FastAPI application entry point
│   ├── workflow_executor.py     # LangGraph pipeline execution engine
│   ├── nodes.py                 # Node implementations (12 types)
│   ├── config.py                # Configuration and environment management
│   ├── requirements.txt         # Python dependencies
│   └── .env                     # Environment variables (API keys)
│
├── src/
│   ├── pages/
│   │   ├── LandingPage.jsx      # Marketing landing page
│   │   ├── WorkflowBuilder.jsx  # Main canvas editor
│   │   ├── FeaturesPage.jsx     # Feature showcase
│   │   └── AboutPage.jsx        # About section
│   ├── components/
│   │   └── Toast.jsx            # Notification system
│   ├── config/
│   │   └── nodeConfigs.js       # Node type definitions
│   ├── App.jsx                  # Root component with routing
│   └── main.jsx                 # Application entry point
│
├── public/
│   └── flowragen-logo.svg       # Brand logo
│
├── package.json                 # Node.js dependencies
├── vite.config.js               # Vite configuration
└── index.html                   # HTML entry point
```
Flowragen provides 12 specialized node types organized into 5 categories:
- Document Loader - Import PDF, DOCX, TXT files
- Text Input - Manual text entry for queries
- Text Splitter - Chunk documents with configurable size and overlap
- Embedder - Generate vector embeddings (OpenAI, HuggingFace, local)
- Summarizer - Create document summaries using LLMs
- Vector Store - Persist embeddings in FAISS or Chroma
- Retriever - Semantic search with configurable top-k
- Ranker - Re-rank retrieved documents by relevance
- Prompt Template - Structure prompts with variables
- LLM Answer - Generate responses using GPT-4, Llama, or Claude
- Output Visualizer - Format and display results
- JSON Output - Export structured data
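The Text Splitter's size/overlap behavior can be sketched in a few lines. This is a character-level simplification for illustration; the real node may split on tokens or separators:

```python
def split_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Slide a fixed-size window across the text, stepping by size minus overlap."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    # Each chunk repeats the last `overlap` characters of the previous one,
    # so a sentence cut at a boundary still appears whole in some chunk.
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]
```

For a 500-character document with size 200 and overlap 50, this yields three chunks whose edges share 50 characters.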
1. Open the Workflow Builder - Navigate to http://localhost:3000/workflow
2. Add Nodes - Drag nodes from the left palette onto the canvas:
   - Document Loader (to load your data)
   - Text Splitter (to chunk documents)
   - Embedder (to create vectors)
   - Vector Store (to persist embeddings)
   - Retriever (to search semantically)
   - LLM Answer (to generate responses)
   - Output Visualizer (to display results)
3. Connect Nodes - Click the output port (right side) of one node, then click the input port (left side) of the next node to create connections.
4. Configure Nodes - Click any node to open the properties panel. Adjust parameters like:
   - Chunk size and overlap for Text Splitter
   - Embedding model for Embedder
   - Top-k results for Retriever
   - Temperature and model for LLM Answer
5. Execute Workflow - Click the "Run Workflow" button. View real-time execution traces and inspect outputs at each stage.
6. Export Code - Click "Export Code" to download production-ready Python code using LangGraph.
```
Document Loader → Text Splitter → Embedder → Vector Store
                                                   ↓
                                               Retriever
                                                   ↓
                                              LLM Answer
                                                   ↓
                                          Output Visualizer
```
This pipeline:
- Loads documents from your filesystem
- Splits them into manageable chunks
- Generates embeddings for semantic search
- Stores vectors in FAISS
- Retrieves relevant context for queries
- Generates answers using an LLM
- Displays formatted results
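In Demo Mode, this whole flow can be approximated without any external service. The sketch below mocks the Embedder with a word-frequency vector and the LLM with an echo; all function names here are illustrative, not Flowragen's internals:

```python
import math
import re
from collections import Counter

def split(text, size=60, overlap=15):
    # Text Splitter: fixed-size window with overlap.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text):
    # Embedder (mock): bag-of-words frequency vector instead of a real model.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, store, top_k=2):
    # Retriever: rank stored chunks by cosine similarity to the query.
    q = embed(query)
    ranked = sorted(store, key=lambda cv: cosine(q, cv[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:top_k]]

def answer(query, context):
    # LLM Answer (mock): echo the retrieved context instead of calling a model.
    return f"Q: {query} | Context: {' '.join(context)}"

doc = ("FAISS is a library for vector similarity search. "
       "LangGraph builds stateful agent graphs. "
       "Vite is a frontend build tool.")
store = [(chunk, embed(chunk)) for chunk in split(doc)]   # Vector Store
context = retrieve("what is FAISS?", store, top_k=1)      # Retriever
print(answer("what is FAISS?", context))                  # LLM Answer (mocked)
```

Swapping the mocks for a real embedding model and LLM client turns this into the production pipeline.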
POST /api/execute - Execute a workflow with the provided configuration.

Request:

```json
{
  "nodes": [
    {"id": "node-1", "type": "document-loader", "config": {...}},
    {"id": "node-2", "type": "text-splitter", "config": {...}}
  ],
  "edges": [
    {"from": "node-1", "to": "node-2"}
  ],
  "query": "What is this document about?"
}
```

Response:

```json
{
  "status": "success",
  "execution_time": 2.34,
  "trace": [...],
  "output": "This document discusses..."
}
```

POST /api/validate - Validate workflow configuration before execution.
POST /api/export - Generate Python code from workflow definition.
GET /api/nodes - List all available node types and their configurations.
GET /api/templates - Retrieve pre-built workflow templates.
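From Python, the execute endpoint can be called with the stdlib alone. This is a sketch that assumes the backend is running locally and that the field names match your workflow:

```python
import json
import urllib.request

def build_execute_payload(nodes, edges, query):
    """Assemble the request body expected by POST /api/execute."""
    return {"nodes": nodes, "edges": edges, "query": query}

payload = build_execute_payload(
    nodes=[
        {"id": "node-1", "type": "document-loader", "config": {}},
        {"id": "node-2", "type": "text-splitter", "config": {"chunk_size": 500}},
    ],
    edges=[{"from": "node-1", "to": "node-2"}],
    query="What is this document about?",
)

req = urllib.request.Request(
    "http://localhost:8000/api/execute",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# result = json.load(urllib.request.urlopen(req))  # requires the backend to be running
```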
Full API documentation available at http://localhost:8000/docs
Create backend/.env with the following:

```env
# LLM Provider (groq, openai, anthropic)
DEFAULT_LLM_PROVIDER=groq

# API Keys
GROQ_API_KEY=your_key_here
OPENAI_API_KEY=your_key_here
ANTHROPIC_API_KEY=your_key_here

# Model Selection
GROQ_MODEL=llama-3.1-70b-versatile
OPENAI_MODEL=gpt-4-turbo
EMBEDDING_MODEL=text-embedding-ada-002

# Vector Store
VECTOR_STORE_TYPE=faiss
VECTOR_STORE_PATH=./vector_stores

# Application
DEBUG=True
LOG_LEVEL=INFO
```

| Provider | Speed | Cost | Best For |
|---|---|---|---|
| Groq | Fastest | Free | Development, prototyping |
| OpenAI | Fast | Paid | Production, complex reasoning |
| Anthropic | Medium | Paid | Long context, analysis |
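These variables can be read with stdlib tooling; a minimal sketch of the shape of config.py (the real module uses Pydantic for validation, which this deliberately omits):

```python
import os
from dataclasses import dataclass

@dataclass
class Settings:
    # Each field falls back to the same default documented in .env above.
    llm_provider: str = os.getenv("DEFAULT_LLM_PROVIDER", "groq")
    groq_model: str = os.getenv("GROQ_MODEL", "llama-3.1-70b-versatile")
    embedding_model: str = os.getenv("EMBEDDING_MODEL", "text-embedding-ada-002")
    vector_store_type: str = os.getenv("VECTOR_STORE_TYPE", "faiss")
    vector_store_path: str = os.getenv("VECTOR_STORE_PATH", "./vector_stores")
    debug: bool = os.getenv("DEBUG", "True").lower() == "true"

settings = Settings()
```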
Netlify/Vercel

```bash
npm run build
# Deploy the dist/ folder
```

Static Hosting

```bash
python -m http.server 3000
# Or use nginx, Apache, etc.
```

Docker

```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY backend/requirements.txt .
RUN pip install -r requirements.txt
COPY backend/ .
CMD ["python", "main.py"]
```

Railway/Heroku

```
# Add Procfile
web: cd backend && python main.py
```

Environment Variables - Ensure all API keys are set in your deployment platform's environment configuration.
- Embedding Caching: Store generated embeddings to avoid recomputation
- Batch Processing: Process multiple documents in parallel
- Vector Store Indexing: Use FAISS IVF indices for large datasets
- LLM Response Streaming: Stream responses for better UX
- Connection Pooling: Reuse HTTP connections to LLM providers
- Horizontal Scaling: Deploy multiple backend instances behind a load balancer
- Async Execution: FastAPI's async support handles concurrent requests efficiently
- Vector Store Sharding: Distribute embeddings across multiple FAISS indices
- Caching Layer: Add Redis for frequently accessed data
See LICENSE for details.
Built with:
- LangChain - LLM orchestration
- LangGraph - Stateful agent framework
- FastAPI - Modern Python web framework
- React - UI library
- FAISS - Vector similarity search
Built for the RAG community by developers who believe in visual thinking.
Make RAG accessible. Make RAG visual. Make RAG powerful.