52 changes: 38 additions & 14 deletions README.md
@@ -16,13 +16,19 @@ A Chromium-based browser with integrated local LLM capabilities for intelligent
- 🛠️ Developer tools integration (F12)
- 📄 Page printing and source viewing
- 🔎 Zoom controls (Ctrl +/-/0)
- 🤖 Ollama/LLM integration with streaming inference
- 💬 Chat sidebar for AI conversations with model capability detection
- ⚡ Comprehensive model manager with download progress tracking
- 🎯 Vision-capable and text-only model support
- 🚀 Automatic GPU acceleration (CUDA, ROCm, Metal)
- ⭐ Default model selection and persistent settings

### Planned Features
- 🤖 Local multi-modal vision LLMs (no cloud dependency)
- 💬 Chat interface for page analysis and interaction
- 📥 Model management with downloads from Hugging Face
- 🔒 Privacy-first AI inference (all processing happens locally)
- ⚡ Powered by Ollama for efficient inference
- 🖼️ Vision model integration for screenshot analysis
- 📊 AI-powered page summarization and content extraction
- 📥 Model management UI with progress tracking
- 🏷️ Smart bookmarking with AI categorization
- 🔍 Semantic search across browsing history

## Tech Stack

@@ -32,24 +38,31 @@ A Chromium-based browser with integrated local LLM capabilities for intelligent
- **Tailwind CSS** - Utility-first styling
- **Zustand** - Lightweight state management
- **Better-SQLite3** - Local database for history and bookmarks
- **Axios** - HTTP client for Ollama API communication
- **ESLint + Prettier** - Code quality and formatting
- **Husky** - Git hooks for pre-commit checks
- **Ollama** - Local LLM inference engine (planned integration)
- **Ollama** - Local LLM inference engine

## Development

### Prerequisites

- Node.js 18+ (LTS recommended)
- npm or pnpm
- Ollama installed ([ollama.com](https://ollama.com))
- Ollama installed ([ollama.com](https://ollama.com)) - Required for AI features

### Getting Started

```bash
# Install dependencies
npm install

# Start Ollama (required for AI features)
ollama serve

# Pull a model (optional, for testing AI features)
ollama pull llama2

# Start development server
npm run dev

@@ -67,11 +80,11 @@ open-browser/
├── src/
│ ├── main/ # Electron main process
│ │ ├── ipc/ # IPC handlers for renderer communication
│ │ ├── services/ # Database and backend services
│ │ ├── services/ # Backend services (database, ollama)
│ │ └── utils/ # Validation and utilities
│ ├── renderer/ # React UI
│ │ ├── components/ # React components (Browser, Chat, etc.)
│ │ ├── store/ # Zustand state management
│ │ ├── store/ # Zustand state management (browser, chat, models)
│ │ └── services/ # Frontend services
│ └── shared/ # Shared types and utilities
├── .github/ # GitHub configuration and workflows
@@ -103,14 +116,24 @@ See [TECH_BRIEFING.md](./TECH_BRIEFING.md) for comprehensive technical documentation
- [x] Context menus and keyboard shortcuts
- [x] Code quality tooling (ESLint, Prettier, Husky)
- [x] CI/CD with GitHub Actions
- [x] Ollama service integration with auto-start capability
- [x] Chat interface with streaming message support
- [x] Comprehensive model manager UI with tabs
- [x] Model registry with 12+ pre-configured models
- [x] Vision vs text-only model capability tracking
- [x] Download progress tracking with real-time updates
- [x] Default model selection with persistent storage
- [x] Model metadata display (size, parameters, capabilities)
- [x] GPU acceleration support (automatic detection)
- [x] IPC handlers for secure LLM operations
- [x] Chat and Model state management with Zustand

### In Progress / Planned
- [ ] Ollama integration for local LLM inference
- [ ] Chat interface for page interaction
- [ ] Model management system
- [ ] Vision model integration for screenshot analysis
- [ ] AI-powered page summarization
- [ ] Vision model integration for screenshot and page analysis
- [ ] Content capture service for page context extraction
- [ ] AI-powered page summarization with readability
- [ ] Smart bookmarking with AI categorization
- [ ] Model registry with pre-configured models

## Keyboard Shortcuts

@@ -123,6 +146,7 @@ See [TECH_BRIEFING.md](./TECH_BRIEFING.md) for comprehensive technical documentation
| `Ctrl/Cmd + R` or `F5` | Reload page |
| `Ctrl/Cmd + H` | Toggle history sidebar |
| `Ctrl/Cmd + B` | Toggle bookmarks sidebar |
| `Ctrl/Cmd + M` | Open model manager |
| `Alt + Left` | Go back |
| `Alt + Right` | Go forward |
| `Ctrl/Cmd + Plus` | Zoom in |
1 change: 1 addition & 0 deletions package.json
@@ -65,6 +65,7 @@
"wait-on": "^9.0.1"
},
"dependencies": {
"axios": "^1.7.0",
"better-sqlite3": "^12.4.1",
"react": "^19.2.0",
"react-dom": "^19.2.0",
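The `ollamaService` consumed by the new IPC handlers below is not part of this diff, but the added `axios` dependency (and the README's note that Axios handles Ollama API communication) points to a streaming NDJSON client. A minimal sketch under those assumptions — the method names match the calls in `handlers.ts`, everything else is illustrative, and the endpoint paths follow Ollama's documented REST API on port 11434:

```typescript
// Hypothetical sketch of src/main/services/ollama.ts (abridged) — the
// service itself is not shown in this diff.
import axios from 'axios';

const OLLAMA_URL = 'http://127.0.0.1:11434';

// Yield one parsed JSON object per NDJSON line from a streamed POST.
async function* streamNdjson(path: string, body: unknown): AsyncGenerator<any> {
  const response = await axios.post(`${OLLAMA_URL}${path}`, body, {
    responseType: 'stream',
  });
  let buffer = '';
  for await (const chunk of response.data) {
    buffer += chunk.toString('utf8');
    let newline: number;
    while ((newline = buffer.indexOf('\n')) !== -1) {
      const line = buffer.slice(0, newline).trim();
      buffer = buffer.slice(newline + 1);
      if (line) yield JSON.parse(line);
    }
  }
}

export const ollamaService = {
  async isRunning(): Promise<boolean> {
    try {
      await axios.get(`${OLLAMA_URL}/api/tags`, { timeout: 1000 });
      return true;
    } catch {
      return false;
    }
  },

  // Streams { status, completed?, total? } progress objects while pulling.
  async *pullModel(name: string) {
    yield* streamNdjson('/api/pull', { model: name, stream: true });
  },

  // Streams assistant tokens; each NDJSON line carries message.content.
  async *chat(options: { model: string; messages: unknown[]; stream?: boolean }) {
    for await (const line of streamNdjson('/api/chat', { ...options, stream: true })) {
      if (line.message?.content) yield line.message.content;
    }
  },
};
```

Keeping the NDJSON parsing in one `streamNdjson` helper leaves each public method a thin async generator, which is exactly the shape the `for await` loops in `handlers.ts` consume.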
134 changes: 134 additions & 0 deletions src/main/ipc/handlers.ts
@@ -6,6 +6,8 @@ import {
validateString,
validateBoolean,
} from '../utils/validation';
import { ollamaService } from '../services/ollama';
import type { GenerateOptions, ChatOptions } from '../../shared/types';

export function registerIpcHandlers() {
console.log('registerIpcHandlers called');
@@ -248,4 +250,136 @@ export function registerIpcHandlers() {
throw error;
}
});

// Ollama/LLM handlers
ipcMain.handle('ollama:isRunning', async () => {
try {
return await ollamaService.isRunning();
} catch (error: any) {
console.error('ollama:isRunning error:', error.message);
throw error;
}
});

ipcMain.handle('ollama:start', async () => {
try {
await ollamaService.start();
return { success: true };
} catch (error: any) {
console.error('ollama:start error:', error.message);
throw error;
}
});

ipcMain.handle('ollama:listModels', async () => {
try {
return await ollamaService.listModels();
} catch (error: any) {
console.error('ollama:listModels error:', error.message);
throw error;
}
});

ipcMain.handle('ollama:pullModel', async (event, modelName: string) => {
try {
validateString(modelName, 'Model name', 256);

// Stream progress updates back to renderer
const generator = ollamaService.pullModel(modelName);

for await (const progress of generator) {
event.sender.send('ollama:pullProgress', progress);
}

return { success: true };
} catch (error: any) {
console.error('ollama:pullModel error:', error.message);
throw error;
}
});

ipcMain.handle('ollama:deleteModel', async (event, modelName: string) => {
try {
validateString(modelName, 'Model name', 256);
await ollamaService.deleteModel(modelName);
return { success: true };
} catch (error: any) {
console.error('ollama:deleteModel error:', error.message);
throw error;
}
});

ipcMain.handle('ollama:generate', async (event, options: GenerateOptions) => {
try {
if (!options || typeof options !== 'object') {
throw new Error('Invalid generate options');
}

validateString(options.model, 'Model name', 256);
validateString(options.prompt, 'Prompt', 50000);

if (options.system) {
validateString(options.system, 'System prompt', 10000);
}

// Stream response tokens back to renderer
const generator = ollamaService.generate({
model: options.model,
prompt: options.prompt,
images: options.images,
system: options.system,
stream: true,
});

for await (const token of generator) {
event.sender.send('ollama:generateToken', token);
}

return { success: true };
} catch (error: any) {
console.error('ollama:generate error:', error.message);
throw error;
}
});

ipcMain.handle('ollama:chat', async (event, options: ChatOptions) => {
try {
if (!options || typeof options !== 'object') {
throw new Error('Invalid chat options');
}

validateString(options.model, 'Model name', 256);

if (!Array.isArray(options.messages)) {
throw new Error('Messages must be an array');
}

// Validate messages
for (const msg of options.messages) {
if (!msg || typeof msg !== 'object') {
throw new Error('Invalid message object');
}
validateString(msg.content, 'Message content', 50000);
if (!['system', 'user', 'assistant'].includes(msg.role)) {
throw new Error('Invalid message role');
}
}

// Stream response tokens back to renderer
const generator = ollamaService.chat({
model: options.model,
messages: options.messages,
stream: true,
});

for await (const token of generator) {
event.sender.send('ollama:chatToken', token);
}

return { success: true };
} catch (error: any) {
console.error('ollama:chat error:', error.message);
throw error;
}
});
}
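For reference, the `GenerateOptions` and `ChatOptions` shapes these handlers validate would look roughly like this. This is inferred from the checks above, since `src/shared/types.ts` is not included in the diff:

```typescript
// Inferred from the validation in handlers.ts above — the actual
// src/shared/types.ts declarations are not shown in this diff.
export type ChatRole = 'system' | 'user' | 'assistant';

export interface ChatMessage {
  role: ChatRole;
  content: string; // validated up to 50,000 chars
}

export interface GenerateOptions {
  model: string;     // validated up to 256 chars
  prompt: string;    // validated up to 50,000 chars
  system?: string;   // optional system prompt, validated up to 10,000 chars
  images?: string[]; // e.g. base64-encoded screenshots for vision models
  stream?: boolean;
}

export interface ChatOptions {
  model: string;
  messages: ChatMessage[];
  stream?: boolean;
}
```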
14 changes: 13 additions & 1 deletion src/main/preload.ts
@@ -21,9 +21,21 @@ const ALLOWED_INVOKE_CHANNELS = [
'webview:openDevTools',
'webview:print',
'webview:viewSource',
'ollama:isRunning',
'ollama:start',
'ollama:listModels',
'ollama:pullModel',
'ollama:deleteModel',
'ollama:generate',
'ollama:chat',
];

const ALLOWED_LISTEN_CHANNELS = ['open-view-source'];
const ALLOWED_LISTEN_CHANNELS = [
'open-view-source',
'ollama:pullProgress',
'ollama:generateToken',
'ollama:chatToken',
];

// Expose protected methods that allow the renderer process to use
// the ipcRenderer without exposing the entire object
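The `contextBridge` exposure that consumes these allowlists is collapsed out of the diff. A minimal sketch of the usual pattern, continuing the file above — the bridge name `electronAPI` is an assumption:

```typescript
// Hypothetical continuation of preload.ts; the real exposure may differ.
import { contextBridge, ipcRenderer, IpcRendererEvent } from 'electron';

contextBridge.exposeInMainWorld('electronAPI', {
  invoke: (channel: string, ...args: unknown[]) => {
    if (!ALLOWED_INVOKE_CHANNELS.includes(channel)) {
      throw new Error(`Blocked invoke on channel: ${channel}`);
    }
    return ipcRenderer.invoke(channel, ...args);
  },
  on: (channel: string, listener: (...args: unknown[]) => void) => {
    if (!ALLOWED_LISTEN_CHANNELS.includes(channel)) {
      throw new Error(`Blocked listener on channel: ${channel}`);
    }
    const wrapped = (_event: IpcRendererEvent, ...args: unknown[]) =>
      listener(...args);
    ipcRenderer.on(channel, wrapped);
    // Hand back an unsubscribe so the renderer can clean up listeners.
    return () => ipcRenderer.removeListener(channel, wrapped);
  },
});
```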
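End to end, a renderer caller would subscribe to the token channel before invoking, since `ollama:chat` streams tokens as `ollama:chatToken` events and only resolves once generation finishes. A hedged sketch using the assumed bridge above; the actual Zustand store wiring in this PR may differ:

```typescript
// Renderer-side sketch — `electronAPI` is the hypothetical preload bridge.
declare const electronAPI: {
  invoke: (channel: string, ...args: unknown[]) => Promise<unknown>;
  on: (channel: string, listener: (...args: unknown[]) => void) => () => void;
};

async function streamChat(model: string, prompt: string): Promise<string> {
  let reply = '';
  // Subscribe first: tokens arrive as events while the invoke is pending.
  const unsubscribe = electronAPI.on('ollama:chatToken', (token) => {
    reply += token as string;
    // e.g. push the partial reply into the chat store for live rendering
  });
  try {
    await electronAPI.invoke('ollama:chat', {
      model,
      messages: [{ role: 'user', content: prompt }],
    });
  } finally {
    unsubscribe();
  }
  return reply;
}
```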