Tlaloc Chatbot UI is a modular, frontend-only chatbot interface built with plain HTML, CSS, and JavaScript. It was designed as a modern local chat client that can switch between multiple visual styles while connecting directly to a locally running Ollama instance.
The project focuses on:
- A clean and extendable component structure
- Multiple selectable chatbot themes
- Responsive layout for desktop and mobile
- Markdown-style assistant responses with code blocks
- Copy actions for both full messages and generated code snippets
- Local model inference through Ollama without adding a backend
- Chat history with user and assistant message bubbles
- Assistant avatar and per-message timestamps
- Typing indicator animation
- Auto-scroll to the newest message
- Dynamic theme switching with CSS variables
- Model selector populated from local Ollama models
- Connection status badge for the local Ollama server
- Code block rendering with "Copy code" support
- Full message copy button
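As a sketch of how markdown-style responses with fenced code blocks might be rendered, the function below splits a reply into prose and code segments so each code block can later receive its own "Copy code" button. The function names and the exact markup are illustrative, not the actual app.js implementation:

```javascript
// Hypothetical sketch of fenced-code-block rendering; names and markup are
// illustrative, not the real app.js code.
const FENCE = "`".repeat(3); // the triple-backtick fence marker

function escapeHtml(s) {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

function renderMessage(markdown) {
  const html = [];
  let code = null;   // null = in prose, array = collecting code lines
  let lang = "text";
  for (const line of markdown.split("\n")) {
    if (line.startsWith(FENCE)) {
      if (code === null) {
        // Opening fence: remember the language tag, start collecting lines.
        lang = line.slice(FENCE.length).trim() || "text";
        code = [];
      } else {
        // Closing fence: emit the collected code as an escaped <pre><code>.
        html.push('<pre><code class="lang-' + lang + '">' +
                  escapeHtml(code.join("\n")) + "</code></pre>");
        code = null;
      }
    } else if (code !== null) {
      code.push(line);
    } else if (line.trim() !== "") {
      html.push("<p>" + escapeHtml(line) + "</p>");
    }
  }
  return html.join("");
}
```

Keeping the code text separate from the prose is also what makes a per-block "Copy code" action straightforward, since the raw snippet is available before escaping.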
The UI currently supports six themes:
- Classic Chat
- Minimal AI
- Cyberpunk Neon
- Terminal Hacker
- Glassmorphism
- Modern Dashboard
Theme styles are defined in themes.css, while shared layout and component styling lives in styles.css.
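Theme switching via CSS variables could be sketched as below. The theme keys and variable names (--bg, --fg, --accent) are assumptions for illustration; the real design tokens live in themes.css:

```javascript
// Sketch of CSS-variable theme switching. Variable names and colors are
// illustrative assumptions, not the actual contents of themes.css.
const THEMES = {
  classic:   { "--bg": "#ffffff", "--fg": "#1a1a1a", "--accent": "#2563eb" },
  cyberpunk: { "--bg": "#0a0a14", "--fg": "#e0e0ff", "--accent": "#ff2bd6" },
  terminal:  { "--bg": "#000000", "--fg": "#33ff33", "--accent": "#33ff33" },
};

// Build an inline style string for a theme. In the browser each entry would
// instead be applied with document.documentElement.style.setProperty(k, v).
function themeStyle(name) {
  const vars = THEMES[name];
  if (!vars) throw new Error("unknown theme: " + name);
  return Object.entries(vars).map(([k, v]) => k + ": " + v).join("; ");
}
```

Because components only reference the variables, switching themes is a single root-level update rather than a restyle of every element.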
TLALOC-CHATBOT/
|-- index.html
|-- styles.css
|-- themes.css
|-- app.js
|-- README.md
`-- img/
|-- ClassicChat.png
|-- DifferentDesigns.png
|-- Glassmorphism.png
|-- HackerDesign.png
`-- TherminalHacker.png
The app loads in the browser as a static frontend. On startup it tries to reach the local Ollama API at:
http://127.0.0.1:11434
It uses:
- GET /api/tags to discover available local models
- POST /api/chat to send the conversation history and receive a model response
This means the project stays backend-free while still supporting real local AI chat.
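A non-streaming chat request to that endpoint might be assembled as follows. The helper names are illustrative; only the endpoint paths and the payload/response shapes (model, messages, stream, message.content) come from the Ollama API:

```javascript
// Sketch of the POST /api/chat call. Helper names are illustrative; the
// payload shape follows the Ollama chat API.
const OLLAMA = "http://127.0.0.1:11434";

// Build the request body: the prior history plus the new user turn.
function chatPayload(model, history, userText) {
  return {
    model,
    messages: [...history, { role: "user", content: userText }],
    stream: false, // one complete JSON response instead of a stream
  };
}

async function sendChat(model, history, userText) {
  const res = await fetch(OLLAMA + "/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(chatPayload(model, history, userText)),
  });
  if (!res.ok) throw new Error("Ollama request failed: " + res.status);
  const data = await res.json();
  return data.message.content; // the assistant's reply text
}
```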
Because browsers often restrict API calls from file:// pages, run the app through a local static server.
cd D:\extrastudy\TLALOC-CHATBOT
python -m http.server 5500
Then open:
http://localhost:5500
You can also use VS Code Live Server, npx serve, or any equivalent local static server.
Make sure Ollama is running locally before opening the app.
Useful commands:
ollama --version
ollama ls
ollama run gemma3:1b
The app automatically detects installed models and lets you select one from the header.
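Model detection can be sketched as a small helper that turns the GET /api/tags response into dropdown entries. The response shape ({ models: [{ name: ... }] }) follows the Ollama API; the function name and the sort are illustrative:

```javascript
// Sketch: extract model names from a GET /api/tags response for the header
// dropdown. The { models: [{ name }] } shape follows the Ollama API.
function modelNames(tagsResponse) {
  return (tagsResponse.models || [])
    .map((m) => m.name)
    .sort(); // stable, alphabetical dropdown order
}
```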
The repository includes screenshots in the img folder.
- index.html: Main application structure and layout
- styles.css: Shared component styling and layout rules
- themes.css: Theme variable overrides and theme-specific presentation
- app.js: Chat behavior, markdown rendering, Ollama API integration, copy actions, and UI state
This project is intentionally easy to extend. Good next steps include:
- Streaming responses from Ollama
- Chat persistence with localStorage
- Conversation switching in the dashboard sidebar
- System prompt controls
- Syntax highlighting for code blocks
- Export chat history
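For the streaming extension above: with stream set to true, Ollama returns newline-delimited JSON objects, each carrying a fragment in message.content and a done flag on the last one. A sketch of an accumulator that tolerates chunks split mid-line (names are illustrative):

```javascript
// Sketch for streaming responses: accumulate newline-delimited JSON chunks
// from Ollama into the full reply text. Handles chunks that split mid-line.
function makeStreamAccumulator() {
  let buffer = ""; // undelivered partial line from the previous chunk
  let text = "";   // assistant reply assembled so far
  return function feed(chunk) {
    buffer += chunk;
    const lines = buffer.split("\n");
    buffer = lines.pop(); // keep any incomplete trailing line for next time
    for (const line of lines) {
      if (!line.trim()) continue;
      const obj = JSON.parse(line);
      if (obj.message && obj.message.content) text += obj.message.content;
    }
    return text; // full reply so far, ready to re-render in the bubble
  };
}
```

Each call to the returned function would follow a read from the response body's stream, re-rendering the assistant bubble with the text returned so far.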
- No backend is required for the current version
- Ollama must be running locally for live responses
- If the Ollama connection fails, the UI shows an inline help message