Ollama Chat

A minimal, local-first desktop chat interface for Ollama models — built with Electron. All inference runs on your machine with no data leaving your device.

Features

  • Streams responses in real time with a live typing cursor
  • Auto-discovers all locally installed Ollama models
  • Full multi-turn conversation memory
  • Markdown rendering — code blocks, tables, bold, lists, and more
  • Copy button on every assistant message
  • New Chat / Clear conversation
  • Connection status indicator
  • Dark theme
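Under the hood, real-time streaming comes from Ollama's /api/chat endpoint: the app POSTs the full message history with stream: true and reads newline-delimited JSON chunks as they arrive. A minimal sketch in plain Node.js (18+, for the global fetch) — the function names are illustrative, not the app's actual code:

```javascript
// Each streamed line is a JSON object such as:
//   {"message":{"role":"assistant","content":"Hel"},"done":false}
// This helper pulls the text fragment out of one NDJSON line.
function extractChunk(line) {
  const obj = JSON.parse(line);
  return obj.done ? "" : obj.message.content;
}

// Stream a chat completion from a local Ollama server, invoking
// onToken for every text fragment as it arrives.
async function streamChat(model, messages, onToken) {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, messages, stream: true }),
  });
  let buffer = "";
  for await (const chunk of res.body) {
    buffer += Buffer.from(chunk).toString("utf8");
    let nl;
    while ((nl = buffer.indexOf("\n")) !== -1) {
      const line = buffer.slice(0, nl).trim();
      buffer = buffer.slice(nl + 1);
      if (line) onToken(extractChunk(line));
    }
  }
}
```

Passing the whole messages array on every request is also what gives the app its multi-turn memory: Ollama's chat endpoint is stateless, so the conversation history travels with each call.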

Prerequisites

You need three things installed before running the app.

1. Node.js (v18 or newer)

Download from nodejs.org or install via your package manager:

# Ubuntu / Debian
sudo apt install nodejs npm

# macOS (Homebrew)
brew install node

# Check your version
node --version

2. Ollama

Download from ollama.com or install via the official script:

curl -fsSL https://ollama.com/install.sh | sh

3. At least one Ollama model

Pull a model before launching the app. Some good options depending on your hardware:

# Fast and lightweight (~2 GB)
ollama pull llama3.2

# Strong general-purpose model (~5 GB)
ollama pull llama3.1:8b

# High-quality 7B alternative (~4 GB)
ollama pull mistral:7b

On a machine with 32 GB RAM you can comfortably run 7B–13B parameter models. Run ollama list to see what you have installed.


Installation

# 1. Clone the repository
git clone https://github.com/your-username/ollama-chat.git
cd ollama-chat

# 2. Install dependencies
npm install

Running the App

Step 1 — Verify Ollama is running

On most systems Ollama starts automatically as a background service after installation. Verify it is up:

curl http://localhost:11434
# Expected response: Ollama is running

If you see address already in use when trying to run ollama serve, that is fine — the service is already running and you can skip to Step 2.

If Ollama is not running, start it manually:

ollama serve

Step 2 — Check you have at least one model installed

ollama list

If the list is empty, pull a model before launching the app:

# Fast and lightweight (~2 GB) — good starting point
ollama pull llama3.2

# Better reasoning, still fast (~5 GB)
ollama pull llama3.1:8b

Wait for the download to complete. You can pull as many models as you like and switch between them inside the app.

Step 3 — Launch the app

npm start

The app will connect to Ollama at http://localhost:11434 and populate the model selector with everything returned by ollama list.
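Model discovery uses Ollama's /api/tags endpoint, which reports the same set of models as ollama list. A sketch of how a selector could be populated from it (the helper names are assumptions, not the app's actual code):

```javascript
// GET /api/tags returns JSON shaped like:
//   {"models":[{"name":"llama3.2:latest", ...}, {"name":"mistral:7b", ...}]}
// Extract just the model names from that payload.
function modelNames(tagsResponse) {
  return tagsResponse.models.map((m) => m.name);
}

// Query a local Ollama server for its installed models.
async function listModels(baseUrl = "http://localhost:11434") {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  return modelNames(await res.json());
}
```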


Development Mode

Opens the app with DevTools enabled:

npm run dev

Project Structure

ollama-chat/
├── main.js          # Electron main process — window creation, app lifecycle
├── preload.js       # Context bridge — exposes markdown parser to renderer
├── package.json
└── src/
    ├── index.html   # UI layout
    ├── styles.css   # Dark theme styles
    └── renderer.js  # Ollama API calls, streaming, UI logic

Troubleshooting

"Ollama offline" in the status bar
Make sure ollama serve is running. If it is, restart it and reload the app (Ctrl+R).

"No models installed"
Pull at least one model: ollama pull llama3.2

Blank window or crash on startup
Run npm run dev to open DevTools and check the console for errors.

Slow responses
Response speed depends on your hardware and the model size; smaller models (3B–7B) are significantly faster on CPU.
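The "Ollama offline" state ultimately means one thing: an HTTP request to the server failed. A hedged sketch of the kind of health check a status indicator might run (the timeout value and function name are assumptions):

```javascript
// Probe the Ollama server root, which answers "Ollama is running".
// Resolves to true when the server is reachable, false otherwise.
async function ollamaIsUp(baseUrl = "http://localhost:11434", timeoutMs = 2000) {
  try {
    const res = await fetch(baseUrl, { signal: AbortSignal.timeout(timeoutMs) });
    return res.ok;
  } catch {
    // Connection refused, DNS failure, or timeout all land here.
    return false;
  }
}
```

Running the same probe from a terminal with curl (as in Step 1 above) tells you whether the problem is the server or the app.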
