---
title: ToolCraft Agent
emoji: ⚡
colorFrom: pink
colorTo: yellow
sdk: gradio
sdk_version: 5.23.1
app_file: app.py
pinned: false
---
An intelligent AI agent that writes and executes Python code to solve complex tasks using web search, image generation, and custom tools.
- 🧠 Code Generation: AI agent writes and executes Python code dynamically to solve tasks
- 🌐 Web Search: Integrated DuckDuckGo search for real-time information retrieval
- 📄 Web Scraping: Visit and extract content from webpages with markdown conversion
- 🎨 Image Generation: Generate images from text prompts using Hugging Face's image models
- 💬 Conversational Memory: Maintains context and variables across multiple messages
- 🔄 Multi-step Reasoning: Breaks down complex tasks into sequential steps
- 📊 Execution Logs: Transparent view of code execution and tool outputs
- 🛡️ Error Handling: Graceful error recovery and informative error messages
Try it out: https://huggingface.co/spaces/smartha2003/toolcraft-agent
How the agent processes a query:

```
User Query
  ↓
LLM (Qwen2.5-Coder) generates reasoning
  ↓
LLM generates Python code with tool calls
  ↓
Code executes → Tools called → Results observed
  ↓
LLM processes results → Next step or final_answer()
  ↓
Result displayed in chat interface
```
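For example, a single iteration of this loop might produce code like the following. This is illustrative only; the actual code is written by the LLM at runtime, using the project's `web_search` and `final_answer` tools:

```python
# Intermediate step: call a tool and print the result,
# which is fed back to the LLM as an observation
results = web_search(query="current population of Tokyo")
print(results)

# Final step (a later iteration): return the answer to the user
final_answer("Tokyo's population is approximately 14 million.")
```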
Key components:

- CodeAgent: Core agent that orchestrates code generation and execution
- Tools: Modular Python functions (web_search, visit_webpage, image_generator)
- Gradio UI: Interactive chat interface with streaming responses
- Memory System: Persistent state across conversation turns
Tech stack (see the wiring sketch after the list):

- Framework: smolagents by Hugging Face
- LLM: Qwen/Qwen2.5-Coder-32B-Instruct
- UI: Gradio
- Tools: DuckDuckGo Search, Requests, PIL/Pillow
- Deployment: Hugging Face Spaces
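As a rough illustration of how this stack fits together, here is a minimal wiring sketch using smolagents. Class names such as `HfApiModel` follow the public smolagents API and may differ slightly from the project's actual `app.py`:

```python
from smolagents import CodeAgent, DuckDuckGoSearchTool, FinalAnswerTool, HfApiModel

# LLM served through the Hugging Face Inference API (requires HF_TOKEN in the environment)
model = HfApiModel(model_id="Qwen/Qwen2.5-Coder-32B-Instruct")

# Agent that writes and executes Python code, calling tools as needed
agent = CodeAgent(
    tools=[DuckDuckGoSearchTool(), FinalAnswerTool()],
    model=model,
)

print(agent.run("What is the population of Tokyo?"))
```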
Prerequisites:

- Python 3.10+
- Hugging Face account with API token
Installation:

1. Clone the repository

   ```bash
   git clone https://github.com/smartha2003/toolcraft.git
   cd toolcraft
   ```

2. Create a virtual environment

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies

   ```bash
   pip install -r requirements.txt
   ```

4. Set up environment variables by creating a `.env` file:

   ```
   HF_TOKEN=your_huggingface_token_here
   ```

5. Run the application

   ```bash
   python app.py
   ```
The app will launch at http://localhost:7860
Example interactions:

```
User:  "Generate an image of a black cat with green eyes"
Agent: [Generates and displays image]

User:  "What is the population of Tokyo?"
Agent: [Searches web, extracts information, provides answer]

User:  "Find the current time in San Francisco and convert it to UTC"
Agent: [Searches for time → Converts timezone → Provides answer]
```
Project structure:

```
toolcraft-agent/
├── app.py              # Main entry point, agent initialization
├── Gradio_UI.py        # Chat interface and message streaming
├── prompts.yaml        # System prompts and LLM instructions
├── agent.json          # Agent configuration
├── requirements.txt    # Python dependencies
├── tools/
│   ├── final_answer.py    # Final answer tool (handles PIL Images)
│   ├── web_search.py      # DuckDuckGo search implementation
│   └── visit_webpage.py   # Webpage fetching and parsing
└── README.md           # This file
```
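The tools under `tools/` are plain Python functions exposed to the agent via smolagents' `@tool` decorator. A simplified sketch of what a webpage-visiting tool could look like (the real `visit_webpage.py` may differ; the use of `markdownify` here is an assumption based on the markdown-conversion feature):

```python
import requests
from markdownify import markdownify
from smolagents import tool


@tool
def visit_webpage(url: str) -> str:
    """Fetch a webpage and return its content converted to markdown.

    Args:
        url: The full URL of the page to visit.
    """
    try:
        response = requests.get(url, timeout=20)
        response.raise_for_status()
        return markdownify(response.text)
    except requests.RequestException as e:
        # Return an informative message instead of raising, so the agent can recover
        return f"Error fetching {url}: {e}"
```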
The agent writes Python code on-the-fly based on the task, executing it in a sandboxed environment with access to custom tools.
Variables and imports persist across steps, allowing the agent to build on previous work:
```python
# Step 1
import pandas as pd
data = [1, 2, 3]

# Step 2 (can use previous variables!)
df = pd.DataFrame(data)  # ✅ Still available
```

Image generation:

- Generates images using Hugging Face's text-to-image models
- Converts PIL Images to file format automatically
- Displays images inline in the chat interface
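One common way to expose such a tool is to load a text-to-image tool from the Hugging Face Hub with smolagents' `load_tool`; the tool id below is a hypothetical placeholder, since the project may bundle its own implementation:

```python
from smolagents import load_tool

# Hypothetical Hub-hosted text-to-image tool id, shown purely for illustration
image_generation_tool = load_tool("agents-course/text-to-image", trust_remote_code=True)

# Added to the agent's tools=[...] list, the agent can call it from generated code;
# the resulting PIL Image is then routed through final_answer() to the chat UI.
```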
Error handling:

- Graceful handling of failed web searches
- Automatic retry with different strategies (see the sketch below)
- Informative error messages for debugging
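The retry behaviour can be as simple as falling back to a broader query when the first search fails or comes back empty. A hedged sketch of that pattern (not the project's exact code; `web_search` is the DuckDuckGo tool listed above, assumed to take a `query` argument):

```python
def search_with_fallback(query: str) -> str:
    """Try the exact query first, then a broader keyword-only variant."""
    last_message = f"No results found for {query!r}."
    for candidate in (query, " ".join(query.split()[:4])):  # broader = first few keywords
        try:
            results = web_search(query=candidate)
            if results:
                return results
        except Exception as e:
            last_message = f"Search failed for {candidate!r}: {e}"
    return last_message  # informative message instead of an unhandled exception
```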
Deployment to Hugging Face Spaces:

- Push the code to a Hugging Face Space
- Add `HF_TOKEN` as a Secret in the Space settings
- The Space auto-rebuilds on push

To run locally instead: `python app.py`

Environment variables:

| Variable | Description | Required |
|---|---|---|
| `HF_TOKEN` | Hugging Face API token | Yes |
| `HUGGINGFACE_HUB_TOKEN` | Alternative token name | Optional |
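A minimal sketch of how `app.py` could pick up the token, assuming `python-dotenv` is used to read the `.env` file (the project's actual loading code may differ):

```python
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the project root, if present

# Accept either variable name; HF_TOKEN takes precedence
hf_token = os.getenv("HF_TOKEN") or os.getenv("HUGGINGFACE_HUB_TOKEN")
if not hf_token:
    raise RuntimeError("Set HF_TOKEN (or HUGGINGFACE_HUB_TOKEN) before starting the app.")
```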
Key settings in `app.py`:

- `max_steps`: Maximum reasoning steps (default: 10)
- `model_id`: LLM model identifier
- `temperature`: Model temperature (default: 0.5)
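For orientation, these settings typically appear where the model and agent are constructed, roughly as follows (a sketch, not the project's exact `app.py`):

```python
from smolagents import CodeAgent, HfApiModel

model = HfApiModel(
    model_id="Qwen/Qwen2.5-Coder-32B-Instruct",  # model_id
    temperature=0.5,                             # temperature
)

agent = CodeAgent(
    tools=[],       # tools omitted here for brevity
    model=model,
    max_steps=10,   # max_steps
)
```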
If you hit rate limits, consider:
- Using a smaller model
- Adding credits to your Hugging Face account
- Using a different model endpoint
If image generation fails:

- Ensure `HF_TOKEN` is set correctly
- Check that the image generation tool is loaded
- Verify PIL/Pillow is installed (quick check below)
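A quick way to confirm Pillow is importable in the same environment that runs `app.py`:

```python
# Run inside the project's virtual environment
import PIL

print(PIL.__version__)
```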
Contributions are welcome! Please feel free to submit a Pull Request.
This project is open source and available under the MIT License.
Shubhada Martha
- GitHub: @smartha2003
- Hugging Face: @smartha2003
- Built with smolagents framework
- Powered by Hugging Face
- UI built with Gradio
⭐ If you find this project interesting, please give it a star!