This project is a web-based application that lets users interact with locally installed Ollama models, either directly or through LangChain. The application consists of a frontend built with Vue.js and a backend built with Flask; the backend forwards user queries to the AI models and returns their responses.
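To illustrate the architecture, here is a minimal sketch of how such a backend might route a query either directly to Ollama's local REST API or through LangChain's Ollama wrapper. The endpoint path, request field names, and port 5000 are assumptions for illustration, not the project's actual API.

```python
# app.py - illustrative sketch of the Flask backend (endpoint path and
# field names are assumptions, not necessarily this project's actual API).
import requests
from flask import Flask, jsonify, request
from langchain_community.llms import Ollama

app = Flask(__name__)
OLLAMA_URL = "http://localhost:11434"  # Ollama's default local address


@app.route("/api/query", methods=["POST"])
def query():
    data = request.get_json()
    model = data["model"]                      # e.g. "llama2"
    prompt = data["message"]
    query_type = data.get("query_type", "direct")

    if query_type == "langchain":
        # Route the prompt through LangChain's Ollama LLM wrapper.
        llm = Ollama(model=model, base_url=OLLAMA_URL)
        answer = llm.invoke(prompt)
    else:
        # Call Ollama's REST API directly.
        resp = requests.post(
            f"{OLLAMA_URL}/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
        )
        answer = resp.json()["response"]

    return jsonify({"response": answer})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```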
- Docker Desktop
- Ollama installed locally
- One or more LLM models pulled into Ollama (you can verify this with the quick check below)
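As a quick check that Ollama is running and has models pulled, you can list the locally available models through Ollama's API (the default port 11434 is assumed here):

```python
# list_models.py - quick check that Ollama is reachable and has models pulled.
import requests

# Ollama lists locally pulled models at /api/tags (default port 11434).
resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()

for model in resp.json().get("models", []):
    print(model["name"])
```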
```bash
git clone <repository-url>
cd <repository-directory>
docker-compose up --build
```

This command will build and start the following services:
- `ollama`: The backend service for handling AI queries.
- `frontend`: The frontend service for the web application.
Open your web browser and navigate to http://localhost:3000 to access the Ollama AI Assistant.
- Select an AI model from the dropdown menu.
- Choose the query type (`LangChain` or `Direct LLM`).
- Type your message in the input box and press `Send`.
- The AI response will be displayed in the chat window.
The project includes Dockerfiles for both the frontend and backend services, as well as a `docker-compose.yml` file to orchestrate the services.
The `frontend/Dockerfile` builds the Vue.js application and serves it using Nginx.
The `ollama/Dockerfile` sets up the Flask application and installs the necessary dependencies.
The `docker-compose.yml` file defines the services and their configurations, including ports, volumes, and environment variables.
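Because docker-compose supplies configuration through environment variables, the Flask backend would typically read the Ollama endpoint from the environment rather than hard-coding it. A minimal sketch, assuming a variable named `OLLAMA_HOST` (the actual variable name defined in this project's `docker-compose.yml` may differ):

```python
# config.py - sketch of environment-based configuration; OLLAMA_HOST is an
# assumed variable name, not necessarily the one this project defines.
import os

# Fall back to Ollama's default local address when the variable is unset.
OLLAMA_URL = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
```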
This project is licensed under the MIT License. See the LICENSE file for more details.