Lorph is a full-stack, advanced AI chat application designed for interactive communication with various cloud-based Large Language Models (LLMs) via the Ollama framework. It features a robust Node.js backend for deep web research and a responsive React frontend equipped with comprehensive file processing capabilities.
- Multi-Model AI Chat: Seamless interaction with a selection of powerful cloud LLMs through Ollama.
- Deep Web Research Engine: A custom backend engine that parses user intent, generates multiple search queries, executes parallel web searches, and deeply scrapes resulting web pages to synthesize highly accurate, cited responses.
- Comprehensive File Processing: Client-side extraction of text from images (OCR), PDF documents, Microsoft Word (.docx), Excel (.xlsx) files, and plain text/code formats.
- Rich Media & Citations: Responses automatically include inline citations, extracted images, and embedded videos sourced directly from the research phase.
- Modern UI/UX: A responsive, chat-based interface with dynamic model selection, file attachment management, and real-time streaming responses.
The following diagram illustrates the complete data flow from user input to the final synthesized response.
```mermaid
flowchart TB
    subgraph Client [Frontend - React / Vite]
        UI[User Interface]
        FP[File Processor]
        MD[Markdown Renderer]
    end
    subgraph Backend [Node.js / Express Server]
        DRE[Deep Research Engine]
        IP[Intent Parser]
        WS[Web Searcher]
        DS[Deep Scraper]
        SYN[Synthesizer]
    end
    subgraph External [External Services]
        OLLAMA[Ollama Cloud API]
        DDG[DuckDuckGo Lite]
        YT[YouTube API]
        WEB[Target Websites]
        PROXIES[CORS Proxies]
    end
    UI -- "1. User Prompt + Files" --> FP
    FP -- "2. Extract Text (OCR, PDF, DOCX)" --> UI
    UI -- "3. Prompt + Context" --> DRE
    DRE -- "4. Generate Queries" --> IP
    IP <--> OLLAMA
    DRE -- "5. Execute Searches" --> WS
    WS <--> DDG
    WS <--> YT
    DRE -- "6. Scrape URLs" --> DS
    DS <--> PROXIES
    PROXIES <--> WEB
    DRE -- "7. Compile Context" --> SYN
    SYN -- "8. Generate Final Response" --> OLLAMA
    OLLAMA -- "9. Stream Data" --> DRE
    DRE -- "10. Stream to Client" --> UI
    UI --> MD
    MD -- "11. Render Rich Text & Media" --> UI
```
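The staged backend flow above can be sketched as a simple pipeline. This is an illustrative outline only, not code from the repository: the stage names (`parseIntent`, `searchAndScrape`, `compileContext`) are assumptions, and the network-bound stages are stubbed so the control flow is visible.

```typescript
// Hypothetical sketch of the deep-research pipeline stages (names are illustrative).
interface ResearchContext {
  queries: string[];
  pages: { url: string; text: string }[];
}

// Stage 1 (Intent Parser): turn one prompt into several focused search queries.
// A real implementation would ask the LLM; here we derive simple variants.
function parseIntent(prompt: string): string[] {
  return [prompt, `${prompt} explained`, `${prompt} tutorial`];
}

// Stages 2-3 (Web Searcher + Deep Scraper): async network calls, stubbed as echoes.
async function searchAndScrape(queries: string[]): Promise<ResearchContext["pages"]> {
  return queries.map((q) => ({
    url: `https://example.com/${encodeURIComponent(q)}`,
    text: `results for ${q}`,
  }));
}

// Stage 4 (Synthesizer input): compile scraped pages into one numbered context block,
// so the LLM can emit inline citations like [1], [2].
function compileContext(ctx: ResearchContext): string {
  return ctx.pages.map((p, i) => `[${i + 1}] ${p.url}\n${p.text}`).join("\n\n");
}

async function runPipeline(prompt: string): Promise<string> {
  const queries = parseIntent(prompt);
  const pages = await searchAndScrape(queries);
  return compileContext({ queries, pages });
}
```

Numbering each source in the compiled context is what makes the later inline citations verifiable by the reader.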
Follow these steps to get the Lorph application running on your local machine or server.
```sh
git clone https://github.com/AL-MARID/Lorph.git
cd Lorph
```

Lorph requires the Ollama client to connect to cloud models and an API key for authentication.
- Ollama Account: Create an account at Ollama.
- Email Verification: Verify your registered email address.
- Login Credentials: Have your Ollama login credentials readily available.
Install the Ollama client on your local machine.
- Linux & macOS:

  ```sh
  curl -fsSL https://ollama.com/install.sh | sh
  ```

- Windows: Download from ollama.com/download/windows
- Start Ollama Server: In a new terminal, initiate the server process:

  ```sh
  ollama serve
  ```

- Device Pairing & Login: In a separate terminal, authenticate your device:

  ```sh
  ollama signin
  ```

  Follow the on-screen instructions to open the authentication URL and connect your device.
To enable Lorph to connect with Ollama's cloud models, an API key must be configured.
- Generate API Key: After completing the device pairing, generate a new API key from your Ollama settings: ollama.com/settings/keys.
- Create `.env.local` file: In the root of the Lorph project directory, create a new file named `.env.local`.
- Add API Key: Insert the generated key into the `.env.local` file:

  ```sh
  OLLAMA_CLOUD_API_KEY=your_api_key_here
  ```
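The server needs this key at startup. As a minimal sketch (the helper name and error message are illustrative, not Lorph's actual code), the backend might validate it like this:

```typescript
// Sketch: read the Ollama cloud API key from the environment and fail fast if missing.
// The variable name OLLAMA_CLOUD_API_KEY matches the .env.local entry above.
function getOllamaKey(env: Record<string, string | undefined>): string {
  const key = env.OLLAMA_CLOUD_API_KEY;
  if (!key) {
    // Failing at startup gives a clearer error than a 401 later.
    throw new Error("OLLAMA_CLOUD_API_KEY is not set; add it to .env.local");
  }
  return key;
}
```

Failing fast here surfaces a missing key as a clear startup error rather than an opaque authentication failure mid-request.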
Use your preferred package manager to install the necessary project dependencies.
```sh
# Using npm
npm install

# Or using pnpm
pnpm install

# Or using yarn
yarn install
```

- Development Mode:

  ```sh
  npm run dev
  ```

  This starts the Express server and Vite middleware concurrently.

- Production Build:

  ```sh
  npm run build
  npm start
  ```

Access the application by navigating to http://localhost:3000 in your web browser.
The system is configured to interact with a diverse range of powerful, cloud-based LLMs through the Ollama framework.
| Model Name | Description |
|---|---|
| `deepseek-v3.1:671b-cloud` | A large-scale model for general tasks. |
| `gpt-oss:20b-cloud` | Open-source GPT variant (20B parameters). |
| `gpt-oss:120b-cloud` | Open-source GPT variant (120B parameters). |
| `kimi-k2:1t-cloud` | A massive model known for its context handling. |
| `qwen3-coder:480b-cloud` | Specialized model for coding and development tasks. |
| `glm-4.6:cloud` | General language model. |
| `glm-4.7:cloud` | General language model (updated version). |
| `minimax-m2:cloud` | High-performance model. |
| `mistral-large-3:675b-cloud` | A powerful model from the Mistral family. |
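Each of these model names is passed as the `model` field of a chat request. As a sketch of what such a request might look like: the `/api/chat` body shape (`model`, `messages`, `stream`) follows Ollama's documented chat API, but the cloud endpoint URL and bearer-token header here are assumptions, not confirmed Lorph code.

```typescript
// Sketch: assemble a chat request for an Ollama cloud model (endpoint/auth assumed).
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(model: string, messages: ChatMessage[], apiKey: string) {
  return {
    url: "https://ollama.com/api/chat", // assumed cloud endpoint
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`, // assumed auth scheme for cloud keys
      },
      // stream: true requests incremental chunks, matching Lorph's streaming UI.
      body: JSON.stringify({ model, messages, stream: true }),
    },
  };
}
```

The returned `{ url, init }` pair plugs directly into `fetch(url, init)`.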
Lorph leverages a modern and robust technology stack to deliver its features:
| Category | Technology |
|---|---|
| Frontend Framework | React 19, Vite |
| Backend Server | Node.js, Express |
| Language | TypeScript |
| Styling | Tailwind CSS |
| Icons | Lucide React |
| Markdown Rendering | React Markdown, remark-gfm, React Syntax Highlighter |
| File Processing | PDF.js, Tesseract.js, Mammoth, read-excel-file |
| Web Scraping | Cheerio, node-fetch, youtube-sr |
The application extracts text content from attached files and includes it in the conversation context, enabling the AI to process and respond based on the document's information.
| Format | Processing Method |
|---|---|
| Images (JPEG, PNG) | OCR text extraction via Tesseract.js (English language) |
| PDF | Multi-page text extraction via PDF.js with page-by-page output |
| Word (DOCX) | Raw text extraction via Mammoth |
| Excel (XLSX) | Row-column parsing with pipe-delimited output via read-excel-file |
| Plaintext / Code | Direct file read (TXT, MD, JSON, CSV, JS, TS, PY, etc.) |
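The dispatch implied by the table above can be sketched as a small routing function. The function name and the processor labels are illustrative, not Lorph's actual identifiers; each branch notes the library the table attributes to that format.

```typescript
// Sketch: route an attached file to a processor by its extension (names illustrative).
type Processor = "ocr" | "pdf" | "docx" | "xlsx" | "text";

function pickProcessor(filename: string): Processor {
  const ext = filename.toLowerCase().split(".").pop() ?? "";
  if (["jpg", "jpeg", "png"].includes(ext)) return "ocr"; // Tesseract.js OCR
  if (ext === "pdf") return "pdf";                        // PDF.js, page by page
  if (ext === "docx") return "docx";                      // Mammoth raw text
  if (ext === "xlsx") return "xlsx";                      // read-excel-file rows
  return "text";                                          // direct read (txt, md, json, ...)
}
```

Falling through to `"text"` means unknown extensions are still read verbatim, matching the table's broad plaintext/code row.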
The web search functionality in Lorph is engineered with a multi-layered backend architecture to ensure efficient and accurate information retrieval:
- Intent Parsing: The user's query is analyzed by the LLM to generate 5-7 highly specific, expert-level search queries.
- Parallel Acquisition: Executes concurrent searches across DuckDuckGo Lite and YouTube APIs to gather initial URLs and metadata.
- Deep Scraping: Fetches the raw HTML of the discovered URLs. It utilizes proxy rotation (e.g., corsproxy.io, allorigins) to bypass access restrictions.
- Knowledge Extraction: Uses Cheerio to parse the DOM, removing noise (ads, navbars) and extracting core text, OpenGraph images, and embedded videos.
- Synthesis: The compiled context is sent to the LLM with strict instructions to generate a comprehensive response, complete with inline citations and rich media formatting.
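The proxy rotation in the scraping step can be sketched as a fallback loop. The proxy URL formats below are assumptions modeled on how corsproxy.io and allorigins are commonly invoked, and the fetch function is injected so the strategy can be shown without network access; none of this is verbatim Lorph code.

```typescript
// Sketch: wrap a target URL with each CORS proxy in turn until one succeeds.
const PROXIES: Array<(u: string) => string> = [
  (u) => `https://corsproxy.io/?${encodeURIComponent(u)}`,          // assumed format
  (u) => `https://api.allorigins.win/raw?url=${encodeURIComponent(u)}`, // assumed format
];

async function fetchWithProxies(
  url: string,
  doFetch: (u: string) => Promise<string>, // injected so the rotation is testable offline
): Promise<string> {
  let lastError: unknown;
  for (const wrap of PROXIES) {
    try {
      return await doFetch(wrap(url));
    } catch (err) {
      lastError = err; // this proxy failed or was blocked; try the next one
    }
  }
  throw lastError; // every proxy failed
}
```

Injecting `doFetch` also makes it easy to layer in timeouts or per-proxy rate limits without touching the rotation logic.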
Contributions to the Lorph project are welcome. For suggestions, feature enhancements, or bug fixes, please adhere to the following workflow:
- Fork the repository.
- Create a new branch (`git checkout -b feature/YourFeature`).
- Implement your changes and commit them (`git commit -m 'Add some feature'`).
- Push your branch to your fork (`git push origin feature/YourFeature`).
- Open a Pull Request to the main repository.
This project is distributed under the MIT License. Refer to the LICENSE file for complete details.