Peer-to-Peer GPU Compute Marketplace — Built for Students, by Students
UniGPU is a peer-to-peer GPU sharing platform that connects students who need compute power with students who have idle GPUs.
The Problem: High-performance GPUs are expensive and inaccessible to many students. Training ML models requires powerful hardware most individuals can't afford — yet thousands of student GPUs sit idle every day.
The Solution: UniGPU lets you share your idle GPU and earn credits, or submit training jobs that run on someone else's GPU. Every job is executed inside a secure, isolated Docker container with the NVIDIA GPU runtime. The platform handles scheduling, execution, real-time log streaming, and usage-based billing — all automatically.
```
┌──────────────┐    REST API       ┌──────────────────────────────────┐
│  Frontend    │ ────────────────► │  Backend (Docker)                │
│  Vite+React  │  localhost:5173   │                                  │
│  :5173       │                   │  FastAPI        (:8000)          │
└──────────────┘    WebSocket      │  PostgreSQL     (:5432)          │
                                   │  Redis          (:6379)          │
┌──────────────┐ ────────────────► │  Celery Worker  (job matching)   │
│  GPU Agent   │  ws://...:8000    └──────────────────────────────────┘
│  (Python)    │
│  Student PC  │
└──────────────┘
```
| Layer | Technology |
|---|---|
| Frontend | React 19, Vite 7, React Router, FontAwesome, Cloudinary |
| Backend API | FastAPI (Python), async SQLAlchemy, Pydantic |
| Database | PostgreSQL 16 |
| Cache / Broker | Redis 7 |
| Task Queue | Celery (job matching, heartbeat monitoring) |
| Auth | JWT (jose), bcrypt password hashing |
| Agent | Python (websockets, docker SDK, pynvml, pystray) |
| Job Isolation | Docker containers with NVIDIA GPU runtime |
| Infrastructure | Docker Compose (4 services) |
Before you begin, ensure you have the following installed:
- Docker Desktop β for running the backend stack
- Node.js 18+ β for the frontend
- Python 3.10+ β for the GPU agent
- Git
```
git clone https://github.com/IammSwanand/UniGPU.git
cd UniGPU
```

Start the backend stack:

```
docker compose up --build
```

This spins up 4 Docker containers: PostgreSQL, Redis, FastAPI, and Celery Worker.

Windows users: if `docker` is not found, add Docker to PATH first:

```
$env:PATH += ";C:\Program Files\Docker\Docker\resources\bin"
```

Wait until the backend is ready (the FastAPI logs will show `Uvicorn running on 0.0.0.0:8000`).
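Before moving on to the frontend, you can confirm the backend port is actually accepting connections. A minimal standard-library sketch — the `port_open` helper is ours, not part of UniGPU:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# In practice you would check the FastAPI port:
#   port_open("127.0.0.1", 8000)
# Demo against a throwaway listener so the snippet runs anywhere:
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
_, demo_port = srv.getsockname()
print(port_open("127.0.0.1", demo_port))  # True
srv.close()
```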
```
cd frontend
npm install   # first time only
npm run dev
```

The app opens at http://localhost:5173
Visit http://localhost:5173/register and sign up as either a Client or Provider.
As a Client, you want to run GPU-intensive workloads (ML training scripts) without owning a GPU.
- Register / Login as a Client
- Top up your wallet with credits from the Client Dashboard
- Upload your training script (a `.py` file) and optionally a `requirements.txt`
- Select a GPU from the available GPUs list, or let the system auto-assign one
- Submit the job — it enters a queue and gets matched to an available GPU
- Monitor progress — view real-time logs via the "Logs" button
- Download logs — once complete, click "Download" in the log viewer to save the output as `.txt`
- Billing — credits are deducted based on GPU usage time (₹0.002/second)
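The billing rule above (₹0.002 per second of GPU time) can be sketched as a one-line helper. The function name, rounding choice, and validation are our assumptions, not UniGPU's actual billing code in `services/billing.py`:

```python
RATE_PER_SECOND = 0.002  # ₹ per second of GPU time, from the docs above

def job_cost(seconds_used: float, rate: float = RATE_PER_SECOND) -> float:
    """Credits to deduct for a job, rounded to 2 decimal places."""
    if seconds_used < 0:
        raise ValueError("usage time cannot be negative")
    return round(seconds_used * rate, 2)

print(job_cost(3600))  # a 1-hour job costs 7.2 credits
print(job_cost(90))    # a 90-second job costs 0.18 credits
```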
```
Client uploads script → Backend queues job → Celery matches to GPU
  → Agent receives job → Runs in Docker → Streams logs → Done
```
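The queue-then-match step can be illustrated with a toy first-come-first-served matcher. This is a sketch of the idea only; UniGPU's real logic lives in `backend/app/services/matching.py` and may differ:

```python
from collections import deque

def match_jobs(queued_jobs: deque, online_gpus: set) -> list:
    """Assign queued jobs to idle online GPUs, first come first served.

    queued_jobs: deque of job ids (FIFO); online_gpus: ids of idle GPUs.
    Returns a list of (job_id, gpu_id) assignments; unmatched jobs stay queued.
    """
    assignments = []
    idle = set(online_gpus)
    while queued_jobs and idle:
        job_id = queued_jobs.popleft()  # oldest job first
        gpu_id = idle.pop()             # any idle GPU
        assignments.append((job_id, gpu_id))
    return assignments

jobs = deque(["job-1", "job-2", "job-3"])
print(match_jobs(jobs, {"gpu-A"}))  # one idle GPU, so only job-1 is matched
print(list(jobs))                   # job-2 and job-3 remain queued
```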
As a Provider, you share your idle GPU with the network and earn credits for every job it runs.
- Register / Login as a Provider
- Download the UniGPU Agent from the Download page
- Run the Agent — double-click the `.exe`; the setup wizard guides you through:
  - Detecting your GPU hardware (name, VRAM, CUDA version)
  - Registering your GPU with the backend
  - Configuring the WebSocket connection
- Go Online — your GPU appears in the marketplace and starts accepting jobs
- Monitor from Dashboard — the Provider Dashboard shows:
- GPU status & health metrics (utilization, temperature, memory)
- Real-time agent logs
- Earnings & transaction history
- Jobs run automatically — the agent receives jobs, runs them in Docker containers, streams logs, and reports results
- Go Offline anytime — toggle from the dashboard to pause accepting jobs
```
Agent starts → WebSocket connects → Heartbeats keep GPU "online"
  → Job assigned → Docker container runs → Logs streamed → Credits earned
```
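The "heartbeats keep GPU online" step implies a staleness check on the backend side. A minimal sketch — the 30-second timeout is our assumption, not UniGPU's configured value:

```python
from datetime import datetime, timedelta, timezone

# Assumed timeout; UniGPU's heartbeat monitor may use a different interval.
HEARTBEAT_TIMEOUT = timedelta(seconds=30)

def gpu_is_online(last_heartbeat: datetime, now: datetime) -> bool:
    """A GPU counts as online if its last heartbeat is recent enough."""
    return now - last_heartbeat <= HEARTBEAT_TIMEOUT

now = datetime.now(timezone.utc)
print(gpu_is_online(now - timedelta(seconds=5), now))   # True  (fresh heartbeat)
print(gpu_is_online(now - timedelta(seconds=90), now))  # False (stale, mark offline)
```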
- NVIDIA GPU with CUDA support
- Docker Desktop with NVIDIA Container Toolkit
- Windows 10/11 (Linux/macOS coming soon)
```
UniGPU/
├── backend/                      # FastAPI backend
│   ├── app/
│   │   ├── main.py               # App entry + CORS + routers
│   │   ├── config.py             # Settings (DB, Redis, JWT)
│   │   ├── database.py           # Async SQLAlchemy setup
│   │   ├── deps.py               # Auth dependencies (JWT)
│   │   ├── models/               # SQLAlchemy models
│   │   ├── schemas/              # Pydantic request/response schemas
│   │   ├── routers/              # API endpoints
│   │   │   ├── auth.py           # Register, Login
│   │   │   ├── gpus.py           # GPU registration & status
│   │   │   ├── jobs.py           # Job submission & management
│   │   │   ├── wallet.py         # Wallet & transactions
│   │   │   └── ws.py             # WebSocket for agent communication
│   │   └── services/             # Business logic
│   │       ├── billing.py        # Usage-based billing
│   │       ├── matching.py       # Job-to-GPU matching
│   │       └── connection_manager.py
│   ├── Dockerfile
│   └── requirements.txt
│
├── frontend/                     # Vite + React SPA
│   └── src/
│       ├── api/client.js         # API wrapper (Axios-like)
│       ├── context/              # Auth context (JWT storage)
│       ├── components/           # Shared components (Sidebar)
│       └── pages/                # Route pages
│           ├── Landing.jsx       # Homepage
│           ├── Login.jsx         # Authentication
│           ├── Register.jsx      # Account creation
│           ├── ClientDashboard.jsx    # Job submission & wallet
│           ├── ProviderDashboard.jsx  # GPU monitoring & earnings
│           ├── Download.jsx      # Agent download page
│           ├── AboutUs.jsx       # Team info
│           └── HowToUse.jsx      # User guide
│
└── docker-compose.yml            # Backend stack orchestration
```
Full interactive API docs available at http://localhost:8000/docs (Swagger UI).
| Method | Endpoint | Description |
|---|---|---|
| POST | `/auth/register` | Register a new user |
| POST | `/auth/login` | Login — returns JWT token |
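A quick way to exercise the register endpoint from Python, using only the standard library. The field names (`email`, `password`, `role`) are guesses at the request schema — check the Swagger UI at `/docs` for the real one:

```python
import json
import urllib.request

# Hypothetical payload; field names are assumptions, not the documented schema.
payload = {"email": "student@example.com", "password": "s3cret", "role": "client"}

req = urllib.request.Request(
    "http://localhost:8000/auth/register",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# With the backend running, uncomment to actually send it:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))

print(req.get_method())  # POST — urllib infers POST when a body is attached
```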
| Method | Endpoint | Description |
|---|---|---|
| GET | `/gpus/` | List all GPUs |
| GET | `/gpus/available` | List online GPUs |
| POST | `/gpus/register` | Register a GPU (provider) |
| PATCH | `/gpus/{id}/status` | Set GPU online/offline |
| Method | Endpoint | Description |
|---|---|---|
| POST | `/jobs/submit` | Upload script + requirements |
| GET | `/jobs/` | List user's jobs |
| GET | `/jobs/{id}/logs` | Get job logs |
| DELETE | `/jobs/{id}` | Delete a job |
| Method | Endpoint | Description |
|---|---|---|
| GET | `/wallet/` | Get balance |
| POST | `/wallet/topup` | Add credits |
| GET | `/wallet/transactions` | Transaction history |
| Endpoint | Description |
|---|---|
| `ws://localhost:8000/ws/agent/{gpu_id}` | Real-time agent ↔ backend communication |
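The agent ↔ backend traffic over this socket is JSON. The exact message schema isn't documented here, so the shapes below are purely illustrative — field names and types are our invention, not UniGPU's actual protocol:

```python
import json

# Hypothetical message shapes — NOT UniGPU's real wire format.
heartbeat = {
    "type": "heartbeat",
    "gpu_id": "gpu-123",
    "utilization": 12,        # percent
    "temperature_c": 41,
    "memory_used_mb": 512,
}
job_result = {
    "type": "job_result",
    "job_id": "job-7",
    "status": "completed",
    "seconds_used": 95.0,     # what billing would be computed from
}

wire = json.dumps(heartbeat)  # what would go over the WebSocket
decoded = json.loads(wire)
print(decoded["type"])        # heartbeat
```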
| Action | Command |
|---|---|
| Start backend | `docker compose up --build` |
| Start backend (detached) | `docker compose up --build -d` |
| Stop backend | `docker compose down` |
| Stop + wipe database | `docker compose down -v` |
| View backend logs | `docker compose logs backend -f` |
| Start frontend | `cd frontend && npm run dev` |
| Build frontend | `cd frontend && npm run build` |
- Fork the repository
- Clone your fork: `git clone https://github.com/<your-username>/UniGPU.git`
- Create a branch: `git checkout -b feature/your-feature`
- Set up the backend: `docker compose up --build`
- Set up the frontend: `cd frontend && npm install && npm run dev`
- Make your changes and test locally
- Push and open a Pull Request
This project is proprietary software. All rights reserved.
Unauthorized copying, modification, distribution, or use of this software, in whole or in part, is strictly prohibited without explicit written permission from the authors.
UniGPU is built with ❤️ by:
| Name | Role | GitHub |
|---|---|---|
| Swanand Wakadmane | Co-founder & Developer | @IammSwanand |
| Sujal Kadam | Co-founder & Developer | @withonly-sujal |
AI & Data Science / Information Technology Engineering Undergraduates, Class of 2027
UniGPU — Peer-to-Peer GPU Marketplace
Built for Students Β· By Students
© 2026 UniGPU. All rights reserved.