ORAND Praxis is a single-file HTML application that enables branching conversations with Large Language Models (LLMs). Unlike traditional linear chat interfaces, this tool allows you to explore multiple conversation paths, fork responses, and manage different branches of dialogue.
- Branching Conversations: Create multiple conversation branches from any point
- Fork Management: Split conversations into multiple parallel branches to explore different responses
- Exact Context Preservation: Branches inherit complete message history with zero information loss (v1.5)
- Selective Message Commit: Cherry-pick which messages to merge from branches to trunk with an interactive UI (v1.6)
- RAG Document Integration: Upload documents (PDF, DOCX, TXT, MD, JSON, CSV) for document-grounded conversations (v2.0)
- Context Size Indicator: Real-time message count with color-coded efficiency metrics (v1.5)
- Full-Text Search: Search across all conversations, branches, and messages with instant results
- Audit Trail: Soft-delete system maintains conversation history with read-only visibility for all resolved branches: discarded, promoted, and split (v1.4, v1.6.2, v1.6.3, v1.7)
- Local Storage: All data stored locally in IndexedDB - no server required
- Markdown Export: Export entire conversation trees to Markdown format
- LM Studio Integration: Connect to a local LM Studio instance for LLM inference
- Visual Tree View: Intuitive visual representation of all conversation branches, including discarded ones
- Color-Coded Branches: Easy-to-distinguish branch colors for better organization
- Single File: Entire application in one HTML file - no dependencies or build process
- A modern web browser (Chrome, Firefox, Edge, or Safari)
- LM Studio (optional, for local LLM inference)
- Download `orand_praxis_v2.0.html` (or the latest version)
- Open the file in your web browser
- That's it! The app is ready to use.
- Install and launch LM Studio
- Load a model in LM Studio
- Enable the local server in LM Studio (typically runs on `http://localhost:1234`)
- Open the app; it will automatically connect to LM Studio
- Select your model from the dropdown and start chatting
- LM Studio URL: Default is `http://localhost:1234` (configurable)
- Max Tokens: Maximum response length (default: 2048)
- Temperature: Controls randomness in responses (0.0 - 2.0)
- Top P: Nucleus sampling parameter
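LM Studio exposes an OpenAI-compatible REST API, so a request using the settings above looks like a standard chat-completions call. The sketch below is illustrative: `buildChatRequest` and its defaults are not the app's actual identifiers, only the parameter names (`max_tokens`, `temperature`, `top_p`) follow the OpenAI-style API that LM Studio mirrors.

```javascript
// Build a request body for LM Studio's OpenAI-compatible
// /v1/chat/completions endpoint. Defaults match the settings above.
const LM_STUDIO_URL = "http://localhost:1234"; // configurable

function buildChatRequest(messages, { model, maxTokens = 2048, temperature = 0.7, topP = 1.0 } = {}) {
  return {
    model,
    messages,          // [{ role: "user" | "assistant" | "system", content: "..." }]
    max_tokens: maxTokens,
    temperature,       // 0.0 - 2.0
    top_p: topP,       // nucleus sampling
  };
}

// Usage (browser or Node 18+):
// const res = await fetch(`${LM_STUDIO_URL}/v1/chat/completions`, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildChatRequest([{ role: "user", content: "Hi" }], { model: "my-model" })),
// });
```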
- Type your message in the input box at the bottom
- Press `Ctrl+Enter` or click the Send button
- Wait for the LLM to respond
- Add Fork: Click the "Add Fork" button to create multiple branches from the last assistant response
- Edit Prompts: Modify the prompt variations for each branch
- Launch Branches: Submit to generate responses for all branches simultaneously
- Navigate: Use the branch selector to switch between different conversation paths
- Commit: Merge a branch into the trunk (main conversation)
- Discard: Soft-delete a branch (can be viewed in audit log)
- Promote: Create a new conversation with full original context plus this branch's direction (v1.7)
- Split: Create a new independent conversation tree from the fork point only
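The four resolution actions above all leave the branch in place as a read-only record, which is what makes the audit trail possible. A minimal sketch of that model, assuming a simple status field (`BranchStatus` and `resolveBranch` are illustrative names, not the app's actual identifiers):

```javascript
// A branch is resolved exactly once; resolved branches become
// read-only but are never hard-deleted.
const BranchStatus = Object.freeze({
  ACTIVE: "active",
  COMMITTED: "committed",
  DISCARDED: "discarded",  // soft-deleted, visible in the audit log
  PROMOTED: "promoted",    // spawned a new conversation with full context
  SPLIT: "split",          // spawned an independent tree from the fork point
});

function resolveBranch(branch, status) {
  if (branch.status !== BranchStatus.ACTIVE) {
    throw new Error("Branch already resolved");
  }
  return { ...branch, status, readOnly: true, resolvedAt: Date.now() };
}
```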
When committing a branch, you can now choose exactly which messages to merge into the main conversation:
- Click "Commit" on any branch
- Review all branch messages in an interactive modal
- Select/deselect messages using checkboxes (all selected by default)
- Use "Select All" to quickly toggle all messages
- Click "Commit Selected" to inject chosen messages into trunk
Benefits:
- Cherry-pick insights: Keep only the valuable parts of an exploration
- Filter hallucinations: Exclude incorrect or off-track responses
- Narrative control: Maintain a coherent main conversation thread
- Full transparency: See exactly what you're merging before committing
The modal shows each message with its role (user/assistant), content preview, and selection state. All selections are recorded in the action log for complete audit trail.
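The selection step boils down to a filter over the branch's messages plus an audit-log entry. A sketch under assumed names (`commitSelected` and the field names are illustrative):

```javascript
// Given the branch's messages and the set of checkbox-selected ids,
// return the messages to inject into the trunk and the log entry
// recording exactly what was merged and what was skipped.
function commitSelected(branchMessages, selectedIds, branchId) {
  const chosen = branchMessages.filter((m) => selectedIds.has(m.id));
  const logEntry = {
    action: "commit",
    branchId,
    selected: chosen.map((m) => m.id),
    skipped: branchMessages.filter((m) => !selectedIds.has(m.id)).map((m) => m.id),
    at: Date.now(),
  };
  return { chosen, logEntry };
}
```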
Exact Context Preservation: When you create branches, ORAND Praxis now preserves the complete conversation history with zero information loss. Unlike previous versions that used LLM-generated summaries, v1.5 stores exact copies of all messages, ensuring:
- Perfect semantic fidelity - no summarization artifacts
- Faster branch creation - eliminates extra LLM calls
- Better conversation continuity across branches
- More predictable LLM behavior
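Mechanically, exact preservation just means the new branch gets a deep copy of the parent's full history instead of an LLM-generated summary. A sketch, assuming a simple `{ id, messages }` parent shape (`createBranch` is an illustrative name):

```javascript
// Deep-copy the parent's message history into the new branch, so
// later edits to the branch never mutate the parent.
// structuredClone is available in modern browsers and Node 17+.
function createBranch(parent, prompt) {
  return {
    parentId: parent.id,
    prompt,
    messages: structuredClone(parent.messages),
  };
}
```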
Context Size Indicator: The input area displays a real-time context counter showing how many messages are in the current conversation context. The indicator uses color coding for efficiency guidance:
- 🟢 Green (< 20 messages): Optimal context size
- 🟡 Yellow (20-39 messages): Good context size
- 🟠 Orange (40-59 messages): Large context - consider forking
- 🔴 Red (≥ 60 messages): Very large context - forking recommended
For branches, hover over the counter to see a comparison with the parent branch's context size.
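The thresholds above map directly to a small tier function. A sketch (`contextTier` is an illustrative name, not the app's actual identifier):

```javascript
// Map a message count to the color-coded tier described above.
function contextTier(messageCount) {
  if (messageCount < 20) return { color: "green",  hint: "Optimal context size" };
  if (messageCount < 40) return { color: "yellow", hint: "Good context size" };
  if (messageCount < 60) return { color: "orange", hint: "Large context - consider forking" };
  return { color: "red", hint: "Very large context - forking recommended" };
}
```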
RAG (Retrieval-Augmented Generation) allows you to ground LLM responses in your uploaded documents.
- Select a conversation or create a new one
- In the sidebar, find the Documents (RAG) panel
- Click "+ Upload Document" button
- Select a file (PDF, DOCX, TXT, MD, JSON, or CSV)
- Document is parsed and stored with the conversation
- Once documents are uploaded, the RAG Active badge appears in the chat header
- All messages sent in that conversation will have access to the document content
- The LLM receives documents as part of its system prompt with instructions to prioritize document information
- Ask questions about the documents: "What is the main conclusion?", "Summarize section 3", etc.
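One way to picture the injection step: the document text is folded into the system prompt ahead of the user's messages. The sketch below is illustrative only; the wording, structure, and `buildSystemPrompt` name are assumptions, not the app's exact prompt.

```javascript
// Fold uploaded documents into a system prompt that instructs the
// model to prioritize document information, as described above.
function buildSystemPrompt(documents) {
  if (documents.length === 0) return "You are a helpful assistant.";
  const docBlocks = documents
    .map((d) => `--- Document: ${d.name} ---\n${d.text}`)
    .join("\n\n");
  return (
    "You are a helpful assistant. Prioritize the information in the " +
    "following documents when answering:\n\n" + docBlocks
  );
}
```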
RAG works seamlessly with all branching features:
- Fork with documents: Create branches to explore different questions on the same documents
- Compare approaches: Branch to compare RAG vs non-RAG responses (upload docs to one branch only)
- Commit insights: Merge valuable document-based insights back to trunk
- Promote/Split: Documents stay linked to conversation - promoted/split branches inherit document context
- View uploaded documents: See list in sidebar with file names
- Remove individual document: Click the × button next to the file name
- Clear all documents: Click "Clear All Documents" button to remove all files from conversation
- Per-conversation storage: Documents are linked to specific conversations, not global
- PDF: Full text extraction from all pages (via PDF.js)
- DOCX: Microsoft Word document text extraction (via Mammoth.js)
- TXT: Plain text files
- MD: Markdown files
- JSON: JSON data files
- CSV: Comma-separated value files
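A sketch of how uploads could be routed to the right parser by extension. The text-based branches are complete; `extractPdfText` and `extractDocxText` are hypothetical helpers standing in for the PDF.js and Mammoth.js calls, and `parserFor` is an illustrative name.

```javascript
// Pick a text-extraction function based on the file extension.
function parserFor(fileName) {
  const ext = fileName.toLowerCase().split(".").pop();
  switch (ext) {
    case "txt":
    case "md":
    case "json":
    case "csv":
      // Text-based formats need no extraction step.
      return async (file) => file.text();
    case "pdf":
      // In the real app: PDF.js (pdfjsLib.getDocument, page.getTextContent)
      return async (file) => extractPdfText(file); // hypothetical helper
    case "docx":
      // In the real app: Mammoth.js (mammoth.extractRawText)
      return async (file) => extractDocxText(file); // hypothetical helper
    default:
      throw new Error(`Unsupported file type: .${ext}`);
  }
}
```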
- Maximum file size: 10MB (raw upload)
- Maximum extracted text: 5MB (after parsing)
- These limits prevent browser storage quota issues and ensure optimal LLM performance
- Large documents should be split into smaller, focused files for better RAG results
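The two limits above are simple byte checks before a document is stored. A sketch (`checkLimits` is an illustrative name; the thresholds match the documented 10MB raw / 5MB extracted caps):

```javascript
// Reject uploads that exceed the documented size caps.
const MAX_RAW_BYTES = 10 * 1024 * 1024;   // 10MB raw upload
const MAX_TEXT_BYTES = 5 * 1024 * 1024;   // 5MB extracted text

function checkLimits(rawBytes, extractedText) {
  if (rawBytes > MAX_RAW_BYTES) {
    return { ok: false, reason: "File exceeds 10MB upload limit" };
  }
  // Measure extracted text in UTF-8 bytes, not characters.
  const textBytes = new TextEncoder().encode(extractedText).length;
  if (textBytes > MAX_TEXT_BYTES) {
    return { ok: false, reason: "Extracted text exceeds 5MB limit" };
  }
  return { ok: true };
}
```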
Note: RAG context is injected automatically - you don't need to reference documents explicitly. Just ask questions naturally.
Click the "Export as Markdown" button to download your entire conversation tree as a .md file. The export includes:
- All conversation branches
- Timestamps and metadata
- Hierarchical structure preserved
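A recursive walk over the tree is enough to preserve the hierarchy as nested Markdown headings. A sketch, assuming a `{ label, messages, children }` node shape (the shape and `exportTree` name are assumptions):

```javascript
// Render a conversation tree as Markdown: one heading per branch,
// nested one level deeper per child, messages listed under each.
function exportTree(node, depth = 1) {
  const lines = [`${"#".repeat(Math.min(depth, 6))} ${node.label}`];
  for (const m of node.messages ?? []) {
    lines.push(`**${m.role}:** ${m.content}`);
  }
  for (const child of node.children ?? []) {
    lines.push(exportTree(child, depth + 1));
  }
  return lines.join("\n\n");
}
```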
Use the search bar at the top of the sidebar to find content across all conversations:
- Type at least 2 characters to see results
- Searches conversation titles, branch labels, and message content
- Results display contextual snippets with highlighted matches
- Click any result to jump directly to that conversation/branch/message
- Results are sorted by relevance (titles first, then by date)
- Limited to 50 most relevant results
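The ranking rules above (minimum query length, titles first, 50-result cap) fit in a short function. A sketch under an assumed flat item shape; `search` and the field names are illustrative:

```javascript
// Search a flat list of items: titles rank before content matches,
// queries under 2 characters return nothing, results cap at 50.
function search(items, query) {
  if (query.trim().length < 2) return [];
  const q = query.toLowerCase();
  const titleHits = [];
  const contentHits = [];
  for (const item of items) {
    if (item.title.toLowerCase().includes(q)) titleHits.push(item);
    else if ((item.content ?? "").toLowerCase().includes(q)) contentHits.push(item);
  }
  // Within each group, newest first.
  const byDate = (a, b) => b.date - a.date;
  return [...titleHits.sort(byDate), ...contentHits.sort(byDate)].slice(0, 50);
}
```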
All conversation data is stored locally in your browser's IndexedDB. This means:
- ✅ Complete privacy - no data sent to external servers (except LM Studio, if configured)
- ✅ Works offline (after the initial page load)
- ✅ Persistent across browser sessions
- ⚠️ Data is browser-specific (not synced across devices)
- ⚠️ Clearing browser data will delete your conversations
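The IndexedDB layer uses six object stores (conversations, nodes, messages, snapshots, actionLog, documents). A sketch of how they could be created during a database upgrade; the key paths and function name are assumptions, not the app's actual code.

```javascript
// Create the six object stores on first open / version upgrade.
const STORES = ["conversations", "nodes", "messages", "snapshots", "actionLog", "documents"];

function ensureStores(db) {
  for (const name of STORES) {
    if (!db.objectStoreNames.contains(name)) {
      db.createObjectStore(name, { keyPath: "id" });
    }
  }
}

// Usage in a browser:
// const req = indexedDB.open("orand-praxis", 1);
// req.onupgradeneeded = (e) => ensureStores(e.target.result);
```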
The application is structured in clear sections:
- META: App metadata and version info
- CONFIG: Configuration constants
- STYLE: CSS styling with design tokens
- STATE: Application state management
- UI: User interface rendering functions
- LOGIC: Core functionality modules
  - `db`: IndexedDB operations (6 stores: conversations, nodes, messages, snapshots, actionLog, documents)
  - `actionlog`: Audit trail management
  - `lmstudio`: LM Studio API integration
  - `tree`: Conversation tree management
  - `branch`: Branch operations
  - `export`: Markdown export functionality
  - `documents`: RAG document processing (PDF.js, Mammoth.js)
- EVENTS: Event handlers
- INIT: Application initialization
- USERGUIDE: Built-in user documentation
- Chrome/Edge (v90+): ✅ Full support
- Firefox (v88+): ✅ Full support
- Safari (v14+): ✅ Full support
- Opera (v76+): ✅ Full support
- Requires JavaScript enabled
- IndexedDB must be available and enabled
- LM Studio connection requires CORS to be properly configured
- Large conversation trees may impact performance
- All data stored locally in browser
- No analytics or tracking
- No external API calls (except to your local LM Studio instance)
- Open source - inspect the code yourself
Contributions are welcome! Please read CONTRIBUTING.md for guidelines.
This project is licensed under the MIT License - see the LICENSE file for details.
- Built for use with LM Studio
- Inspired by the need for non-linear conversation exploration with LLMs
- v2.0.0 (2026-03-24): RAG document integration - upload PDF, DOCX, TXT, MD, JSON, CSV for document-grounded conversations
- v1.7.1 (2026-03-24): Read-only UI consistency for all resolved branches
- v1.7.0 (2026-03-24): Promote creates new conversations with full context
- v1.6.3 (2026-03-24): Split branch visibility with audit trail
- v1.6.2 (2026-03-24): Promoted branch visibility
- v1.6.0 (2026-03-24): Selective message commit with interactive UI
- v1.5.0 (2026-03-23): Exact context preservation, context size indicator
- v1.2 (2026-03-22): Full branching and export capabilities
For issues, questions, or suggestions, please open an issue on GitHub.
Made with ❤️ for better LLM conversations