# 🧠 AutoML Multi-Agent System (MCP + Supabase + PyTorch + GCP)

📝 **Team Size:** 4 members. Each member builds one independent AI agent.


## 🚀 Overview

This project is a multi-agent AutoML pipeline built using:

- **MCP Server** (orchestration and chatbot integration)
- **Supabase** (centralized database + message storage)
- **Google Cloud Storage (GCP)** (dataset & model storage)
- **PyTorch** (model training & evaluation)
- **Gemini LLM** (reasoning in the Planner Agent)

Each agent handles one stage of the ML workflow, from dataset discovery to final evaluation, and all communication happens through Supabase tables (no direct API calls between agents).


βš™οΈ System Architecture User ↓ MCP Server (chatbot) β”œβ”€β”€ Planner Agent β†’ creates project plan β”œβ”€β”€ Dataset Agent β†’ fetches & uploads dataset β”œβ”€β”€ Training Agent β†’ trains model locally └── Evaluation Agent β†’ evaluates trained model ↓ Supabase (Database) ↓ GCP Bucket (Storage)


## 🧩 Agent Responsibilities

| Agent | Member | Description |
| --- | --- | --- |
| 🧠 Planner Agent | Member 1 | Interprets user intent (via Gemini) and creates the project plan in Supabase (`projects` table). |
| 📦 Dataset Agent | Member 2 | Authenticates with Kaggle, downloads the dataset, uploads it to GCP, and updates the `datasets` table. |
| ⚙️ Training Agent | Member 3 | Downloads the dataset from GCP, trains a PyTorch model locally, uploads the model to GCP, and updates the `models` table. |
| 📊 Evaluation Agent | Member 4 | Evaluates the trained model on test data, logs accuracy and metrics, and marks the project as completed. |


## 🧱 Database Schema (Supabase)

### Core Tables

```sql
create table if not exists projects (
  id uuid primary key default gen_random_uuid(),
  user_id uuid references users(id) on delete cascade,
  name text not null,
  task_type text not null,
  framework text default 'pytorch',
  dataset_source text default 'kaggle',
  search_keywords text[],
  status text default 'draft',
  metadata jsonb default '{}'::jsonb,
  created_at timestamptz default now(),
  updated_at timestamptz default now()
);

create table if not exists datasets (
  id uuid primary key default gen_random_uuid(),
  project_id uuid references projects(id) on delete cascade,
  name text,
  gcs_url text,
  size text,
  source text default 'kaggle',
  created_at timestamptz default now()
);

create table if not exists models (
  id uuid primary key default gen_random_uuid(),
  project_id uuid references projects(id) on delete cascade,
  name text,
  framework text default 'pytorch',
  gcs_url text,
  accuracy numeric,
  metadata jsonb default '{}'::jsonb,
  created_at timestamptz default now()
);

create table if not exists agent_logs (
  id uuid primary key default gen_random_uuid(),
  project_id uuid references projects(id) on delete cascade,
  agent_name text,
  message text,
  log_level text default 'info',
  created_at timestamptz default now()
);
```

### Existing Chat Tables (already in your MCP)

`users`, `messages`, `embeddings`
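The `agent_logs` table is the shared audit trail every agent writes to. As a minimal sketch, a row can be built and validated in plain Python before insertion; `build_log_row` is a hypothetical helper (not part of the repo), and `id`/`created_at` are left to the Postgres defaults:

```python
# Assumed log levels; the schema only defaults log_level to 'info'.
ALLOWED_LEVELS = {"debug", "info", "warning", "error"}

def build_log_row(project_id: str, agent_name: str, message: str,
                  log_level: str = "info") -> dict:
    """Validate and assemble a payload row for the agent_logs table."""
    if log_level not in ALLOWED_LEVELS:
        raise ValueError(f"unknown log_level: {log_level!r}")
    if not message:
        raise ValueError("message must be non-empty")
    return {
        "project_id": project_id,
        "agent_name": agent_name,
        "message": message,
        "log_level": log_level,
    }

row = build_log_row("11111111-2222-3333-4444-555555555555",
                    "dataset_agent", "Upload finished")
# With a supabase-py client, the row would then be inserted via:
#   supabase.table("agent_logs").insert(row).execute()
```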


## ☁️ GCP Bucket Structure

```
gs://automl-datasets/
├── raw/
│   ├── plantvillage.zip
│   ├── chestxray.zip
├── models/
│   ├── plantvillage_model.pth
└── temp/
    ├── intermediate/
```

Naming convention:

- Dataset files: `raw/{dataset_name}.zip`
- Models: `models/{project_name}_model.pth`
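The naming convention above can be captured as tiny shared helpers so every agent builds identical object paths; the function names are illustrative, not from the repo:

```python
# Bucket name from the structure above.
BUCKET = "automl-datasets"

def dataset_blob(dataset_name: str) -> str:
    """Object path for a dataset archive: raw/{dataset_name}.zip"""
    return f"raw/{dataset_name}.zip"

def model_blob(project_name: str) -> str:
    """Object path for a trained model: models/{project_name}_model.pth"""
    return f"models/{project_name}_model.pth"

def gcs_url(blob_path: str) -> str:
    """Full gs:// URL as stored in the datasets/models tables."""
    return f"gs://{BUCKET}/{blob_path}"

print(gcs_url(dataset_blob("plantvillage")))  # gs://automl-datasets/raw/plantvillage.zip
print(gcs_url(model_blob("plantvillage")))    # gs://automl-datasets/models/plantvillage_model.pth
```

An actual upload would then pass `dataset_blob(...)` to the google-cloud-storage client's `bucket.blob(path).upload_from_filename(...)`.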


## ⚡ Workflow Summary

| Step | Agent | Input | Output | Supabase Status |
| --- | --- | --- | --- | --- |
| 1️⃣ | Planner Agent | User message | JSON project plan | `pending_dataset` |
| 2️⃣ | Dataset Agent | Project ID | GCS dataset URL | `pending_training` |
| 3️⃣ | Training Agent | Dataset URL | GCS model file | `pending_evaluation` |
| 4️⃣ | Evaluation Agent | Model + dataset | Accuracy + metrics | `completed` |

All coordination happens through `projects.status`.
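Because all coordination runs through `projects.status`, the linear flow can be encoded once and shared, so no agent ever writes an out-of-order status. This is a sketch under that assumption; the repo may not include such a guard:

```python
# The project lifecycle: 'draft' is the schema default, then each agent
# advances the status exactly one step.
STATUS_FLOW = [
    "draft",
    "pending_dataset",
    "pending_training",
    "pending_evaluation",
    "completed",
]

def next_status(current: str) -> str:
    """Return the status that follows `current`, refusing invalid jumps."""
    i = STATUS_FLOW.index(current)  # raises ValueError on an unknown status
    if i == len(STATUS_FLOW) - 1:
        raise ValueError("project already completed")
    return STATUS_FLOW[i + 1]
```

Each agent would call `next_status` right before its Supabase update, e.g. the Dataset Agent moves `pending_dataset` to `pending_training`.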


## 🧩 MCP Server Integration

### Folder Structure

```
AutoML-MCP-Agents/
├── mcp_server/
│   └── main.py
├── agents/
│   ├── planner/
│   │   ├── main.py
│   │   └── architecture.md
│   ├── dataset/
│   │   ├── main.py
│   │   └── architecture.md
│   ├── training/
│   │   ├── main.py
│   │   └── architecture.md
│   └── evaluation/
│       ├── main.py
│       └── architecture.md
├── README.md          ← (this file)
├── requirements.txt
└── .env
```


## 🧠 MCP Configuration (Example)

In `mcp.yaml` or `config.json`:

```yaml
tools:
  - name: planner
    path: ./agents/planner/main.py
  - name: dataset
    path: ./agents/dataset/main.py
  - name: training
    path: ./agents/training/main.py
  - name: evaluation
    path: ./agents/evaluation/main.py
```

Each tool registers itself when the MCP Server starts.
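For the `config.json` variant, the same tools list can be parsed and sanity-checked at startup. A sketch only: the real MCP server's loading logic may differ, and `load_tools` is a hypothetical name:

```python
import json

# The tools list above, mirrored in its assumed config.json shape.
CONFIG = """
{
  "tools": [
    {"name": "planner",    "path": "./agents/planner/main.py"},
    {"name": "dataset",    "path": "./agents/dataset/main.py"},
    {"name": "training",   "path": "./agents/training/main.py"},
    {"name": "evaluation", "path": "./agents/evaluation/main.py"}
  ]
}
"""

def load_tools(config_text: str) -> dict:
    """Map tool name -> entrypoint path, failing fast on malformed entries."""
    tools = json.loads(config_text)["tools"]
    for t in tools:
        if not t.get("name") or not t.get("path"):
            raise ValueError(f"tool entry missing name/path: {t}")
    return {t["name"]: t["path"] for t in tools}

tools = load_tools(CONFIG)
```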

## 🔑 Environment Variables

Create a `.env` file at the repo root:

```
SUPABASE_URL=
SUPABASE_KEY=
GCP_BUCKET_NAME=
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service_account.json
GEMINI_API_KEY=
MCP_API_KEY=
LOG_LEVEL=INFO
```

Each agent reads the same env file (shared configuration).
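Since every agent reads the same variables, it helps to fail fast at startup when one is unset. A minimal sketch, assuming the variable names above (the repo's agents may load them differently, e.g. via python-dotenv):

```python
import os

# Variables each agent expects; LOG_LEVEL is optional with a default.
REQUIRED_VARS = [
    "SUPABASE_URL",
    "SUPABASE_KEY",
    "GCP_BUCKET_NAME",
    "GOOGLE_APPLICATION_CREDENTIALS",
    "GEMINI_API_KEY",
    "MCP_API_KEY",
]

def load_config(env=None) -> dict:
    """Collect required settings, raising if any are missing or empty."""
    if env is None:
        env = os.environ
    missing = [k for k in REQUIRED_VARS if not env.get(k)]
    if missing:
        raise RuntimeError("missing env vars: " + ", ".join(missing))
    cfg = {k: env[k] for k in REQUIRED_VARS}
    cfg["LOG_LEVEL"] = env.get("LOG_LEVEL", "INFO")
    return cfg
```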


## 🧰 Local Setup Instructions

### 1️⃣ Clone the repo

```bash
git clone https://github.com//AutoML-MCP-Agents.git
cd AutoML-MCP-Agents
```

### 2️⃣ Create a Python environment

```bash
python -m venv venv
source venv/bin/activate   # Linux/macOS
venv\Scripts\activate      # Windows
```

### 3️⃣ Install dependencies

```bash
pip install -r requirements.txt
```

### 4️⃣ Run the MCP server

```bash
cd mcp_server
uvicorn main:app --reload
```

### 5️⃣ Run an agent (example)

```bash
cd ../agents/training
python main.py
```

Each agent can be run locally or inside a lightweight Docker container.


## 🧩 How Agents Communicate

All agents are stateless and interact through Supabase:

- Planner inserts → `projects`
- Dataset reads → inserts → updates status
- Training reads → inserts model → updates status
- Evaluation reads → updates metrics → finalizes project
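The handoff pattern above can be simulated end to end with an in-memory list standing in for the `projects` table. This is purely illustrative; the real agents would use supabase-py's `table(...).select()/insert()/update()` calls against the schema above, and the URLs/accuracy here are placeholders:

```python
def poll(projects, status):
    """What each stateless agent does: find a row waiting on its stage."""
    return next((p for p in projects if p["status"] == status), None)

def planner(projects, name):
    # Planner inserts a new project and hands off to the Dataset Agent.
    projects.append({"id": len(projects) + 1, "name": name,
                     "status": "pending_dataset"})

def dataset_agent(projects):
    p = poll(projects, "pending_dataset")
    if p:
        p["dataset_url"] = f"gs://automl-datasets/raw/{p['name']}.zip"
        p["status"] = "pending_training"

def training_agent(projects):
    p = poll(projects, "pending_training")
    if p:
        p["model_url"] = f"gs://automl-datasets/models/{p['name']}_model.pth"
        p["status"] = "pending_evaluation"

def evaluation_agent(projects):
    p = poll(projects, "pending_evaluation")
    if p:
        p["accuracy"] = 0.938  # placeholder metric
        p["status"] = "completed"

# One full pipeline run: no agent calls another directly; only the
# shared "table" carries state between stages.
projects = []
planner(projects, "plantvillage")
for agent in (dataset_agent, training_agent, evaluation_agent):
    agent(projects)
```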


## 🧾 Testing End-to-End

| Stage | Input | Expected Outcome |
| --- | --- | --- |
| 🧠 Planner | "Train a PyTorch model for tomato leaves" | Project appears in Supabase with `pending_dataset`. |
| 📦 Dataset | Kaggle key uploaded | Dataset uploaded to GCP; status → `pending_training`. |
| ⚙️ Training | Triggered by MCP | Model trained locally, uploaded to GCP; status → `pending_evaluation`. |
| 📊 Evaluation | Auto-trigger | Metrics computed; status → `completed`. |
| ✅ Output | | Chatbot shows: "Model accuracy 93.8%. Project complete!" |


## 🧩 Team Member Division

| Member | Agent | Key Skills Used |
| --- | --- | --- |
| 1️⃣ | Planner Agent | LLM integration, Supabase schema design |
| 2️⃣ | Dataset Agent | Kaggle API, GCP uploads, data management |
| 3️⃣ | Training Agent | PyTorch model training, file upload |
| 4️⃣ | Evaluation Agent | Model evaluation, metric computation |


πŸ” Security Guidelines β€’ Never store user kaggle.json beyond the session. β€’ Restrict Supabase service keys (write-only for agents). β€’ Use least-privilege service accounts for GCP uploads. β€’ Validate all Supabase input before insert/update. β€’ Ensure model training runs locally in isolated environment (no untrusted code).


## 🧭 Future Enhancements

- Add an Auto Hyperparameter Tuner Agent.
- Introduce a Model Comparison Dashboard (Supabase + Streamlit).
- Add a Docker Compose file for one-click setup.
- Add a RAG Agent later (to remember past model results).
- Enable optional GPU cloud training via RunPod or Vertex AI.


## ✅ End-to-End Summary

| Layer | Description |
| --- | --- |
| Frontend | MCP Chatbot for user interaction |
| Middleware | MCP Server routes requests to the correct agent |
| Backend | 4 independent AI agents (Planner, Dataset, Training, Evaluation) |
| Database | Supabase stores metadata, messages, logs |
| Storage | GCP bucket stores large datasets & trained models |
| Execution | Local PyTorch for training & evaluation |
| Output | Metrics + accuracy summary displayed in chat |


## 📸 Example Final Flow

```
User: "Train a PyTorch model for plant disease detection"
  ↓
Planner Agent    → creates project plan
  ↓
Dataset Agent    → fetches dataset from Kaggle → uploads to GCP
  ↓
Training Agent   → downloads dataset → trains model → uploads .pth to GCP
  ↓
Evaluation Agent → evaluates model → updates Supabase
  ↓
Chatbot → "✅ Training complete. Accuracy: 93.8%."
```
