# Streamlit Chatbot

A Streamlit-powered chatbot application that supports both OpenAI GPT and Ollama models, with integrated Arxiv paper retrieval as a tool.
## Features

- Conversational AI: Chat with an assistant powered by either OpenAI GPT-4o or Ollama (Llama3.2).
- Arxiv Paper Retrieval: Use the `retrieve` tool to fetch and summarize academic papers from Arxiv.
- Streaming Responses: AI responses are streamed for a smooth chat experience.
- Model Switching: The subtitle displays which model is currently active.
- LangSmith Tracing: Optional tracing for debugging and analytics.
## Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/wangychn/streamlit-chatbot.git
   cd streamlit-chatbot
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Set environment variables:
   - Create a `.env` file in the root directory:

     ```
     OPENAI_API_KEY=your-openai-key
     LANGCHAIN_API_KEY=your-langsmith-key
     ```

   - (Optional) Set additional LangSmith tracing variables if needed.

4. Start Ollama (if using Ollama):
   - Make sure Ollama is running locally and the desired model (e.g., `llama3.2`) is available.
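The app needs these variables in its environment at startup. A minimal stdlib-only sketch of loading a `.env` file — purely illustrative; the project may instead rely on a package such as `python-dotenv`, and `load_env` is a hypothetical name:

```python
import os

def load_env(path=".env"):
    """Tiny .env loader: a hypothetical stand-in for python-dotenv's load_dotenv()."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments; keep only KEY=VALUE pairs.
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                # Don't overwrite variables already set in the environment.
                os.environ.setdefault(key.strip(), value.strip())
```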
## Usage

Run the Streamlit app:

```bash
streamlit run src/streamlit_app.py
```

- The app will display a chat interface.
- The subtitle will show which model is active (GPT-4o or Ollama).
- Type your questions in the chat input.
- To retrieve papers, mention in your message that you want to investigate papers.
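Streaming in a Streamlit chat app is typically driven by a generator handed to `st.write_stream`. A minimal sketch — the function name and word-level chunking are assumptions, not the app's actual code:

```python
def stream_tokens(text):
    """Yield a response word by word, the way a model client streams chunks."""
    for word in text.split():
        yield word + " "

# Inside the app, the generator would be rendered incrementally, e.g.:
# st.write_stream(stream_tokens(response_text))
```

Because `st.write_stream` consumes the generator as chunks arrive, the reply appears token by token instead of all at once.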
## Project Structure

```
src/
  chatbot.py          # Chatbot logic and graph construction
  streamlit_app.py    # Streamlit UI and app entry point
```

## Customization

- Switch models: Change the `llm` initialization in `streamlit_app.py` to use either `ChatOpenAI` or `ChatOllama`.
- Add tools: Extend `chatbot.py` to add more tools or retrieval functions.
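As an illustration of the second point, a new retrieval helper could target the public arXiv Atom API. Everything below (the function names, the commented `@tool` wrapper) is a hypothetical sketch, not code from `chatbot.py`:

```python
import urllib.parse

# Public arXiv Atom API endpoint (documented query interface).
ARXIV_API = "http://export.arxiv.org/api/query"

def build_arxiv_query(search: str, max_results: int = 5) -> str:
    """Build an arXiv API request URL for a free-text search."""
    params = {"search_query": f"all:{search}", "max_results": str(max_results)}
    return f"{ARXIV_API}?{urllib.parse.urlencode(params)}"

# A LangChain tool in chatbot.py would wrap a fetch of this URL, e.g.:
# @tool
# def search_arxiv(query: str) -> str:
#     """Search arXiv for papers matching the query."""
#     ...
```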
## Troubleshooting

- Session state errors: Always run the app with `streamlit run`, not plain `python`.
- Model errors: Ensure your API keys and Ollama server are set up correctly.
- Streaming issues: Input is disabled while the AI is generating a response to prevent interruptions.
