An interactive, AI-driven research assistant that researches, filters, and summarizes topics.
➤ The assistant uses a local AI model to generate search queries, browse websites, and extract relevant content. You interactively select sources and output format before the AI creates a live summary.
✅ Interactive Control: Source & format selection.
🧠 AI-Driven: Research & summarization by LLM.
💾 Intelligent Caching: Accelerates repeated searches.
⚡ Live Summary: Real-time output in the terminal.
📊 Progress Indicators: Visual feedback during long operations.
📡 Robust Ollama Communication: Automatic retries.
🔍 Strict Content Filtering: By domain, language, length.
🔧 Easy Configuration & Validation: config.json is checked.
📦 Fully Automatic Setup: Dependencies & Ollama models.
🔒 100% Local: Full data control.
🐍 Python 3.x
🐳 Ollama: A running local Ollama server.
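The "automatic retries" behavior above can be sketched as a small backoff wrapper. This is an illustrative sketch only; the function and parameter names (`with_retries`, `attempts`, `base_delay`) are assumptions, not the actual API of scraper.py.

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0):
    """Call fn, retrying with exponential backoff on connection errors.
    Hypothetical helper; scraper.py's real retry logic may differ."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts, surface the error
            time.sleep(base_delay * (2 ** attempt))

# Demo: a fake Ollama call that fails twice, then succeeds.
calls = {"n": 0}
def flaky_ollama_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("Ollama server not reachable")
    return "summary text"

print(with_retries(flaky_ollama_call, base_delay=0.01))
```

Exponential backoff gives a briefly unavailable local Ollama server time to recover before the script gives up.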
- Download Files: Place scraper.py, requirements.txt, and config.json in the same directory.
- Run Script:
python scraper.py
- First Start: config.json is created, Python packages are installed, and the Ollama model is checked (with an offer to download it).
- Interactive Process: Enter a topic, select sources, choose a format, and watch the summary.
The final result is saved in output.txt.
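Repeated runs on the same topic are accelerated by the cache. A minimal sketch of a file-based cache keyed by a query hash, assuming hypothetical helper names (`cache_key`, `cached_search`) that are not the actual functions in scraper.py:

```python
import hashlib
import json
import os

def cache_key(query):
    # Stable filename derived from the query string.
    return hashlib.sha256(query.encode("utf-8")).hexdigest()

def cached_search(query, search_fn, cache_dir):
    """Return the cached result for query, or run search_fn and cache it.
    Illustrative only; the real caching in scraper.py may differ."""
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, cache_key(query) + ".json")
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)  # cache hit: skip the search
    result = search_fn(query)
    with open(path, "w") as f:
        json.dump(result, f)     # cache miss: store for next time
    return result
```

With this scheme, searching the same topic twice performs the expensive network work only once.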
Adjust the assistant's behavior via the config.json file. Important settings include Ollama details, search parameters, filters, and caching options.
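A config.json along these lines covers the settings mentioned above. The key names shown here are assumptions for illustration; check the generated config.json for the actual schema.

```json
{
  "ollama": {
    "host": "http://localhost:11434",
    "model": "llama3"
  },
  "search": {
    "max_results": 10
  },
  "filters": {
    "allowed_domains": [],
    "languages": ["en"],
    "min_length": 200
  },
  "cache": {
    "enabled": true,
    "ttl_hours": 24
  }
}
```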