Lorph


Lorph is a full-stack, advanced AI chat application designed for interactive communication with various cloud-based Large Language Models (LLMs) via the Ollama framework. It features a robust Node.js backend for deep web research and a responsive React frontend equipped with comprehensive file processing capabilities.

Key Features

  • Multi-Model AI Chat: Seamless interaction with a selection of powerful cloud LLMs through Ollama.
  • Deep Web Research Engine: A custom backend engine that parses user intent, generates multiple search queries, executes parallel web searches, and deeply scrapes resulting web pages to synthesize highly accurate, cited responses.
  • Comprehensive File Processing: Client-side extraction of text from images (OCR), PDF documents, Microsoft Word (.docx), Excel (.xlsx) files, and plain text/code formats.
  • Rich Media & Citations: Responses automatically include inline citations, extracted images, and embedded videos sourced directly from the research phase.
  • Modern UI/UX: A responsive, chat-based interface with dynamic model selection, file attachment management, and real-time streaming responses.

System Architecture & Workflow

The following diagram illustrates the complete data flow from user input to the final synthesized response.

```mermaid
flowchart TB
    subgraph Client [Frontend - React / Vite]
        UI[User Interface]
        FP[File Processor]
        MD[Markdown Renderer]
    end

    subgraph Backend [Node.js / Express Server]
        DRE[Deep Research Engine]
        IP[Intent Parser]
        WS[Web Searcher]
        DS[Deep Scraper]
        SYN[Synthesizer]
    end

    subgraph External [External Services]
        OLLAMA[Ollama Cloud API]
        DDG[DuckDuckGo Lite]
        YT[YouTube API]
        WEB[Target Websites]
        PROXIES[CORS Proxies]
    end

    UI -- "1. User Prompt + Files" --> FP
    FP -- "2. Extract Text (OCR, PDF, DOCX)" --> UI
    UI -- "3. Prompt + Context" --> DRE

    DRE -- "4. Generate Queries" --> IP
    IP <--> OLLAMA

    DRE -- "5. Execute Searches" --> WS
    WS <--> DDG
    WS <--> YT

    DRE -- "6. Scrape URLs" --> DS
    DS <--> PROXIES
    PROXIES <--> WEB

    DRE -- "7. Compile Context" --> SYN
    SYN -- "8. Generate Final Response" --> OLLAMA
    OLLAMA -- "9. Stream Data" --> DRE

    DRE -- "10. Stream to Client" --> UI
    UI --> MD
    MD -- "11. Render Rich Text & Media" --> UI
```
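The streaming leg of the diagram (steps 10-11) can be sketched as a small helper that frames each model token as a server-sent event. This is an illustrative sketch, not the Lorph implementation; `StreamChunk` and `formatSSEFrame` are hypothetical names.

```typescript
// Hypothetical sketch of step 10: framing streamed tokens for the client.
// StreamChunk and formatSSEFrame are illustrative names, not Lorph APIs.
interface StreamChunk {
  token: string; // incremental model output
  done: boolean; // true on the final frame
}

// Encode one chunk as a server-sent-events frame: "data: <json>\n\n".
function formatSSEFrame(chunk: StreamChunk): string {
  return `data: ${JSON.stringify(chunk)}\n\n`;
}

// An Express handler would set "Content-Type: text/event-stream" and call
// res.write(formatSSEFrame({ token, done: false })) as tokens arrive.
```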

Installation and Setup

Follow these steps to get the Lorph application running on your local machine or server.

Step 1: Clone the Repository

git clone https://github.com/AL-MARID/Lorph.git
cd Lorph

Step 2: Configure Ollama and API Key

Lorph requires the Ollama client to connect to cloud models and an API key for authentication.

Prerequisites

  1. Ollama Account: Create an account at ollama.com.
  2. Email Verification: Verify your registered email address.
  3. Login Credentials: Have your Ollama login credentials readily available.

Manual Ollama Installation

Install the Ollama client on your local machine.

Ollama Server and Login

  1. Start Ollama Server: In a new terminal, initiate the server process:
    ollama serve
  2. Device Pairing & Login: In a separate terminal, authenticate your device:
    ollama signin
    Follow the on-screen instructions to open the authentication URL and connect your device.

API Key Configuration

To enable Lorph to connect with Ollama's cloud models, an API key must be configured.

  1. Generate API Key: After completing the device pairing, generate a new API key from your Ollama settings: ollama.com/settings/keys.
  2. Create .env.local file: In the root of the Lorph project directory, create a new file named .env.local.
  3. Add API Key: Insert the generated key into the .env.local file.
    OLLAMA_CLOUD_API_KEY=your_api_key_here
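A minimal sketch of how the server side might consume this variable at startup, failing fast when it is missing. The helper names are illustrative, not part of the Lorph codebase.

```typescript
// Illustrative sketch: read OLLAMA_CLOUD_API_KEY (from .env.local) and build
// an Authorization header. Helper names are hypothetical, not Lorph APIs.
function requireApiKey(env: Record<string, string | undefined> = process.env): string {
  const key = env.OLLAMA_CLOUD_API_KEY;
  if (!key) {
    throw new Error("OLLAMA_CLOUD_API_KEY is not set; check .env.local");
  }
  return key;
}

// Bearer-token header for outgoing requests to the cloud API.
function authHeaders(key: string): Record<string, string> {
  return { Authorization: `Bearer ${key}` };
}
```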

Step 3: Install Dependencies

Use your preferred package manager to install the necessary project dependencies.

# Using npm
npm install

# Or using pnpm
pnpm install

# Or using yarn
yarn install

Step 4: Build and Run the Application

  1. Development Mode:

    npm run dev

    This starts the Express server and Vite middleware concurrently.

  2. Production Build:

    npm run build
    npm start

Access the application by navigating to http://localhost:3000 in your web browser.

Technical Details

Supported Cloud Models

The system is configured to interact with a diverse range of powerful, cloud-based LLMs through the Ollama framework.

| Model Name | Description |
| --- | --- |
| deepseek-v3.1:671b-cloud | A large-scale model for general tasks. |
| gpt-oss:20b-cloud | Open-source GPT variant (20B parameters). |
| gpt-oss:120b-cloud | Open-source GPT variant (120B parameters). |
| kimi-k2:1t-cloud | A massive model known for its context handling. |
| qwen3-coder:480b-cloud | Specialized model for coding and development tasks. |
| glm-4.6:cloud | General language model. |
| glm-4.7:cloud | General language model (updated version). |
| minimax-m2:cloud | High-performance model. |
| mistral-large-3:675b-cloud | A powerful model from the Mistral family. |
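Any of these model names can be passed through to Ollama's chat endpoint. As a sketch, the request body below follows Ollama's public `/api/chat` shape; the `buildChatRequest` helper itself is illustrative, not a Lorph API.

```typescript
// Sketch of a streaming chat request for one of the cloud models above.
// Body shape follows Ollama's /api/chat API; buildChatRequest is illustrative.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(model: string, messages: ChatMessage[]) {
  return { model, messages, stream: true };
}

// e.g. POST http://localhost:11434/api/chat with this body:
const body = buildChatRequest("qwen3-coder:480b-cloud", [
  { role: "user", content: "Explain tail-call optimization." },
]);
```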

Technology Stack

Lorph leverages a modern and robust technology stack to deliver its features:

| Category | Technology |
| --- | --- |
| Frontend Framework | React 19, Vite |
| Backend Server | Node.js, Express |
| Language | TypeScript |
| Styling | Tailwind CSS |
| Icons | Lucide React |
| Markdown Rendering | React Markdown, remark-gfm, React Syntax Highlighter |
| File Processing | PDF.js, Tesseract.js, Mammoth, read-excel-file |
| Web Scraping | Cheerio, node-fetch, youtube-sr |

File Processing Capabilities

The application extracts text content from attached files and includes it in the conversation context, enabling the AI to process and respond based on the document's information.

| Format | Processing Method |
| --- | --- |
| Images (JPEG, PNG) | OCR text extraction via Tesseract.js (English language) |
| PDF | Multi-page text extraction via PDF.js with page-by-page output |
| Word (DOCX) | Raw text extraction via Mammoth |
| Excel (XLSX) | Row-column parsing with pipe-delimited output via read-excel-file |
| Plaintext / Code | Direct file read (TXT, MD, JSON, CSV, JS, TS, PY, etc.) |
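The dispatch logic implied by the table can be sketched as a simple extension router. The route labels below are names for this sketch only, not Lorph identifiers.

```typescript
// Illustrative dispatcher mapping a filename to the processing route in the
// table above. Route labels are names for this sketch, not Lorph APIs.
type Route = "ocr" | "pdf" | "docx" | "xlsx" | "plaintext";

const routes: Record<string, Route> = {
  jpg: "ocr", jpeg: "ocr", png: "ocr",
  pdf: "pdf",
  docx: "docx",
  xlsx: "xlsx",
};

function routeFor(filename: string): Route {
  const ext = filename.split(".").pop()?.toLowerCase() ?? "";
  return routes[ext] ?? "plaintext"; // TXT, MD, JSON, CSV, code files, etc.
}
```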

Web Search Integration (Deep Research)

The web search functionality in Lorph is engineered with a multi-layered backend architecture to ensure efficient and accurate information retrieval:

  1. Intent Parsing: The user's query is analyzed by the LLM to generate 5-7 highly specific, expert-level search queries.
  2. Parallel Acquisition: Executes concurrent searches across DuckDuckGo Lite and YouTube APIs to gather initial URLs and metadata.
  3. Deep Scraping: Fetches the raw HTML of the discovered URLs. It utilizes proxy rotation (e.g., corsproxy.io, allorigins) to bypass access restrictions.
  4. Knowledge Extraction: Uses Cheerio to parse the DOM, removing noise (ads, navbars) and extracting core text, OpenGraph images, and embedded videos.
  5. Synthesis: The compiled context is sent to the LLM with strict instructions to generate a comprehensive response, complete with inline citations and rich media formatting.
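Steps 2-3 above can be sketched as a parallel search fan-out plus round-robin proxy rotation. The searcher signature and the proxy URL shape are assumptions for illustration, not taken from the Lorph source.

```typescript
// Minimal sketch of steps 2-3: run searches in parallel, then address each
// URL through a rotating proxy list. Signatures here are assumed.
type Search = (query: string) => Promise<string[]>; // returns URLs

async function parallelSearch(queries: string[], search: Search): Promise<string[]> {
  const results = await Promise.all(queries.map(search));
  return [...new Set(results.flat())]; // de-duplicate across queries
}

// Round-robin proxy rotation, as used to bypass access restrictions.
function proxiedUrl(target: string, proxies: string[], attempt: number): string {
  const proxy = proxies[attempt % proxies.length];
  return `${proxy}${encodeURIComponent(target)}`;
}
```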

Contributing

Contributions to the Lorph project are welcome. For suggestions, feature enhancements, or bug fixes, please adhere to the following workflow:

  1. Fork the repository.
  2. Create a new branch (git checkout -b feature/YourFeature).
  3. Implement your changes and commit them (git commit -m 'Add some feature').
  4. Push your branch to your fork (git push origin feature/YourFeature).
  5. Open a Pull Request to the main repository.

License

This project is distributed under the MIT License. Refer to the LICENSE file for complete details.