
feat: Ollama + Gemma 3 4B config and Windows desktop launcher#249

Open
Methu20-BIM wants to merge 3 commits into open-jarvis:main from Methu20-BIM:feat/ollama-gemma3-windows-setup

Conversation

@Methu20-BIM

Summary

  • configs/openjarvis/config.toml — switches the default engine from vllm (which requires A100-class GPUs) to ollama, and the default model from GLM-4.7-Flash to gemma3:4b, making OpenJarvis usable on a standard consumer Windows PC with no cloud API keys
  • jarvis.bat — a Windows batch launcher that activates the project virtual environment and prompts the user for a question; intended to be pinned to the desktop so non-technical users can interact with Jarvis without opening a terminal
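The config change might look like the following sketch. The section and key names (`engine`, `model`, `temperature`) are assumptions inferred from the commit notes; the actual schema of configs/openjarvis/config.toml may differ:

```toml
# configs/openjarvis/config.toml — illustrative fragment only
[inference]
engine = "ollama"     # was "vllm", which targets A100-class GPUs
model = "gemma3:4b"   # was GLM-4.7-Flash
temperature = 0.7     # conversational default per the commit message
```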

Motivation

The existing config.toml is tuned for an 8×A100-80 GB eval cluster. For developers who want to run OpenJarvis locally on a Windows machine with Ollama, there was no ready-made example config. This PR provides that, along with a zero-friction desktop shortcut script.

Test plan

  • ollama pull gemma3:4b + ollama serve
  • uv sync (Python 3.12, pinned via .python-version)
  • uv run jarvis ask "What is the capital of France?" → returns correct answer
  • Double-click jarvis.bat on Windows desktop → prompts for question, returns answer

🤖 Generated with Claude Code

Methu20-BIM and others added 3 commits April 14, 2026 23:19
Switch default inference config from vLLM/GLM-4.7-Flash (A100-optimised)
to Ollama with gemma3:4b so the project runs out-of-the-box on a
consumer Windows machine without cloud APIs or specialist hardware.

- configs/openjarvis/config.toml: set engine to ollama, model to
  gemma3:4b, temperature 0.7 (conversational default)
- jarvis.bat: Windows batch launcher — activates the project venv and
  prompts the user for a question, making Jarvis accessible from the
  desktop without opening a terminal manually

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Three issues fixed for Windows desktop users:

- Set chcp 65001 + PYTHONUTF8=1 so emoji/unicode in model responses
  no longer crash the terminal (cp1252 UnicodeEncodeError)
- Use full path to .venv\Scripts\jarvis.exe instead of relying on
  venv activation via call, which silently failed in some cmd contexts
- Add a :loop so users can ask multiple questions in one session without
  reopening the window; type "avslutt" to exit
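Putting the three fixes together, the launcher might look roughly like this. This is a sketch reconstructed from the commit notes, not the script from the PR; the prompt text and `ask` subcommand are assumptions:

```bat
@echo off
rem Fix 1: UTF-8 code page + PYTHONUTF8 so unicode in model output
rem does not raise cp1252 UnicodeEncodeError in cmd
chcp 65001 >nul
set PYTHONUTF8=1

:loop
set /p QUESTION=Ask Jarvis (type "avslutt" to exit): 
if /i "%QUESTION%"=="avslutt" goto :eof

rem Fix 2: call the venv executable by full path instead of relying
rem on "call activate", which silently failed in some cmd contexts
"%~dp0.venv\Scripts\jarvis.exe" ask "%QUESTION%"

rem Fix 3: loop so users can ask several questions per session
goto loop
```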

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds a full-featured browser-based chat UI for Jarvis, mirroring the
core UX of the Claude chat interface, running entirely locally via Ollama.

dashboard/index.html:
- Sidebar with model selector (gemma3:4b / gemma4), new-chat, clear,
  and live Ollama status indicator
- Markdown rendering via marked.js (headers, lists, tables, bold, etc.)
- Syntax-highlighted code blocks via highlight.js with one-click copy
- Artifact panel: HTML code blocks open as live previews in an iframe
- Copy-message button on every turn (appears on hover)
- Image upload via file picker, drag-and-drop, or Ctrl+V clipboard paste
- Suggestion chips on the welcome screen
- Streaming responses with animated typing indicator
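For the streaming responses, Ollama's /api/generate endpoint emits newline-delimited JSON, where each chunk carries a `response` text fragment and the final chunk sets `done: true`. A minimal parser sketch in plain JavaScript — the function name is hypothetical and this is not code from the PR:

```javascript
// Collect the assistant text out of an Ollama NDJSON stream buffer.
// Each non-empty line is one JSON chunk from /api/generate.
function collectOllamaStream(ndjson) {
  let text = "";
  let done = false;
  for (const line of ndjson.split("\n")) {
    if (!line.trim()) continue; // skip blank/partial trailing lines
    const chunk = JSON.parse(line);
    if (typeof chunk.response === "string") text += chunk.response;
    if (chunk.done) done = true;
  }
  return { text, done };
}
```

In the dashboard this logic would run incrementally on each `fetch()` ReadableStream chunk (appending to the message bubble as fragments arrive) rather than on a complete buffer.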

start_dashboard.bat:
- Launches Python's built-in HTTP server on port 8765 and opens the
  browser automatically, so users can start the dashboard with one click
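The launcher described above reduces to two commands; a sketch (the dashboard path passed to the browser is an assumption, and the PR's actual script may differ):

```bat
@echo off
rem Open the dashboard in the default browser, then serve the repo root
start "" http://localhost:8765/dashboard/index.html
python -m http.server 8765
```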

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>