---
title: LLM Web Scraper
emoji: 🕸️
colorFrom: blue
colorTo: green
sdk: docker
app_port: 7860
---
Scrape any web page, ask questions, and get structured answers powered by LangChain, FireCrawl/Crawl4AI and leading LLMs from NVIDIA and Google—all wrapped in a clean Gradio interface.
🔗 Live Demo: https://huggingface.co/spaces/frkhan/llm-web-scrapper
🔗 Read Full Story: From Broken Selectors to Intelligent Scraping: A Journey into LLM-Powered Web Automation
## ✨ Features

- 🕸️ **Multi-Backend Scraping**: Choose between `FireCrawl` for robust, API-driven scraping and `Crawl4AI` for local, Playwright-based scraping.
- 🧠 **Intelligent Extraction**: Use powerful LLMs (NVIDIA or Google Gemini) to understand your query and extract specific information from scraped content.
- 📊 **Structured Output**: Get answers in markdown tables, JSON, or plain text, as requested.
- 📈 **Full Observability**: Integrated with `Langfuse` to trace both scraping and LLM-extraction steps.
- ✨ **Interactive UI**: A clean and simple interface built with `Gradio`.
- 🐳 **Docker-Ready**: Comes with `Dockerfile` and `docker-compose` configurations for easy local and production deployment.
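The multi-backend design boils down to dispatching a URL to one of two scraper clients. The sketch below is purely illustrative: the function names are placeholders, and the real logic lives in `firecrawl_client.py` and `crawl4ai_client.py`.

```python
def scrape(url: str, backend: str = "Crawl4AI") -> str:
    """Dispatch a scrape request to the chosen backend (illustrative sketch)."""

    def firecrawl_scrape(u: str) -> str:
        # Placeholder for the FireCrawl API call in firecrawl_client.py
        return f"[FireCrawl] markdown for {u}"

    def crawl4ai_scrape(u: str) -> str:
        # Placeholder for the local Playwright-based crawl in crawl4ai_client.py
        return f"[Crawl4AI] markdown for {u}"

    backends = {"FireCrawl": firecrawl_scrape, "Crawl4AI": crawl4ai_scrape}
    try:
        return backends[backend](url)
    except KeyError:
        raise ValueError(f"Unknown backend: {backend!r}") from None

print(scrape("https://example.com", backend="FireCrawl"))  # [FireCrawl] markdown for https://example.com
```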
## 🛠️ Tech Stack

| Component | Purpose |
|---|---|
| LangChain | Orchestration of LLM calls |
| FireCrawl / Crawl4AI | Web scraping backends |
| NVIDIA / Gemini | LLM APIs for information extraction |
| Langfuse | Tracing and observability for all operations |
| Gradio | Interactive web UI |
| Docker | Containerized deployment |
| Playwright | Headless browser automation used by Crawl4AI |
## ⚠️ Disclaimer

This tool is provided for educational and experimental purposes only. Users are solely responsible for their actions and must comply with the terms of service of any website they scrape. Always respect `robots.txt` and be mindful of each website's policies. The developers of this tool are not liable for any misuse.
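As a practical aid, compliance with `robots.txt` can be checked with Python's standard library before scraping a URL. This is a minimal sketch, not part of this repository, and the user-agent string is an arbitrary example:

```python
from urllib.robotparser import RobotFileParser

def is_allowed(robots_txt: str, url: str, user_agent: str = "my-scraper") -> bool:
    """Return True if the given robots.txt rules permit user_agent to fetch url."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

rules = """\
User-agent: *
Disallow: /private/
"""

print(is_allowed(rules, "https://example.com/blog/post"))     # True
print(is_allowed(rules, "https://example.com/private/data"))  # False
```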
## 🖥️ Running Locally

1. Clone the repository:

   ```bash
   git clone https://github.com/KI-IAN/llm-web-scrapper.git
   cd llm-web-scrapper
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Install Playwright browsers (for Crawl4AI):

   ```bash
   playwright install
   ```

4. Create a `.env` file in the root directory with your API keys:

   ```env
   GOOGLE_API_KEY=your_google_api_key
   NVIDIA_API_KEY=your_nvidia_api_key
   FIRECRAWL_API_KEY=your_firecrawl_api_key

   # Optional: For Langfuse tracing
   LANGFUSE_PUBLIC_KEY=pk-lf-...
   LANGFUSE_SECRET_KEY=sk-lf-...
   LANGFUSE_HOST=https://cloud.langfuse.com
   ```

5. Run the application:

   ```bash
   python app.py
   ```
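For reference, the environment loading that `config.py` performs can be approximated with a few lines of standard-library Python. This is a hedged sketch (the actual project may rely on a package such as `python-dotenv`), but it shows the `KEY=value` format the `.env` file uses:

```python
import os

def parse_env(text: str) -> dict:
    """Parse simple KEY=value lines, skipping blanks and '#' comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

def load_env(path: str = ".env") -> None:
    """Read a .env file (if present) and export its values to the process."""
    try:
        with open(path) as f:
            os.environ.update(parse_env(f.read()))
    except FileNotFoundError:
        pass  # fall back to variables already set in the environment

sample = "GOOGLE_API_KEY=abc123\n# a comment\nLANGFUSE_HOST=https://cloud.langfuse.com"
print(parse_env(sample)["GOOGLE_API_KEY"])  # abc123
```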
## 🐳 Running with Docker

- **For Production**: This uses the standard `docker-compose.yml`:

  ```bash
  docker compose up --build
  ```

- **For Local Development (with live code reload)**: This uses `docker-compose.dev.yml` to mount your local code into the container:

  ```bash
  docker compose -f docker-compose.dev.yml up --build
  ```

Access the app at http://localhost:12200.
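The development override works by mounting the source tree into the container. The file below is a hypothetical sketch of what such a `docker-compose.dev.yml` might contain; the service name, port mapping, and paths are assumptions, not copied from the repository:

```yaml
services:
  llm-web-scrapper:
    build: .
    ports:
      - "12200:7860"   # host port 12200 -> the app's port inside the container
    env_file:
      - .env
    volumes:
      - .:/app         # mount local code so edits show up without rebuilding
```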
## 🔑 Getting API Keys

To use this app, you'll need API keys for Google Gemini, NVIDIA NIM, and FireCrawl. For full observability, you'll also need keys for Langfuse.
### Google Gemini

Gemini is Google's family of generative AI models. To get an API key:
- Visit the Google AI Studio.
- Sign in with your Google account.
- Click "Create API Key" and copy the key shown.
- Use this key in your `.env` file or configuration as `GOOGLE_API_KEY`.
Note: Gemini API access may be limited based on region or account eligibility. Check the current Gemini API rate limits before heavy use.
### NVIDIA NIM

NIM (NVIDIA Inference Microservices) provides hosted models via REST APIs. To get started:
- Go to the NVIDIA API Catalog.
- Choose a model (e.g., `nim-gemma`, `nim-mistral`, etc.) and click "Get API Key".
- Sign in or create an NVIDIA account if prompted.
- Copy your key and use it as `NVIDIA_API_KEY` in your environment.
Tip: You can test NIM endpoints directly in the browser before integrating.
### FireCrawl

- Sign up at FireCrawl.
- Find your API key in the dashboard.
### Langfuse

- Sign up or log in at Langfuse Cloud.
- Navigate to your project settings and then to the "API Keys" tab.
- Create a new key pair to get your `LANGFUSE_PUBLIC_KEY` (starts with `pk-lf-...`) and `LANGFUSE_SECRET_KEY` (starts with `sk-lf-...`).
- Add these to your `.env` file to enable tracing.
## 📖 How to Use

1. **Enter a URL**: Provide the URL of the web page you want to analyze.
2. **Define Your Query**: Specify what you want to extract (e.g., "product name, price, and rating" or "summarize this article").
3. **Scrape the Web Page**: Choose a scraper (`Crawl4AI` or `FireCrawl`) and click "Scrape Website".
4. **Select Model & Provider**: Choose an LLM to process the scraped content.
5. **Extract Info**: Click "Extract Info by LLM" to get a structured answer.
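Under the hood, steps like these typically reduce to assembling one extraction prompt from the query, the requested output format, and the scraped text. The helper below is a hypothetical illustration of that pattern; the project's real prompt lives in `llm_inference_service.py` and may differ:

```python
def build_extraction_prompt(query: str, page_text: str, output_format: str = "markdown table") -> str:
    """Combine the user's query and scraped content into a single LLM instruction."""
    return (
        "You are a data-extraction assistant.\n"
        f"From the page content below, extract: {query}.\n"
        f"Respond only with a {output_format}.\n\n"
        "--- PAGE CONTENT ---\n"
        f"{page_text}"
    )

prompt = build_extraction_prompt(
    query="product name, price, and rating",
    page_text="Acme Widget - $19.99 - 4.5 stars",
)
print(prompt.splitlines()[1])  # From the page content below, extract: product name, price, and rating.
```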
## 📁 Project Structure

```text
llm-web-scrapper/
├── .env                      # Local environment variables (not tracked by git)
├── .github/                  # GitHub Actions workflows
├── .gitignore
├── docker-compose.yml        # Production Docker configuration
├── docker-compose.dev.yml    # Development Docker configuration
├── Dockerfile
├── requirements.txt
├── app.py                    # Gradio UI and application logic
├── config.py                 # Environment variable loading
├── crawl4ai_client.py        # Client for Crawl4AI scraper
├── firecrawl_client.py       # Client for FireCrawl scraper
└── llm_inference_service.py  # Logic for LLM calls
```
## 📄 License

This project is open-source and distributed under the MIT License. Feel free to use, modify, and distribute it.
## 🙏 Acknowledgments

- LangChain for orchestrating LLM interactions.
- FireCrawl & Crawl4AI for providing powerful scraping backends.
- NVIDIA AI Endpoints & Google Gemini API for their state-of-the-art LLMs.
- Langfuse for providing excellent observability tools.
- Gradio for making UI creation simple and elegant.
- Docker for containerization.
- Playwright for the browser automation that powers Crawl4AI.