An AI-driven development platform that assists with software project management, planning, and deployment configuration.
Autonoma uses specialized AI agents to assist across the software development lifecycle:
- COORD – Coordinator & Project Manager
- ANALYST – Requirements & UX
- TECHLEAD – Architecture & Stack
- DEVTEAM – Development + Testing
- SHIPPER – DevOps & Delivery
- SENTRY – Logging, Metrics & Alerts
- Non-technical Dashboard: Simple interface for operators to manage client projects
- AI-Powered Proposals: Automated generation of project proposals with MVP/Standard/Extended options
- Automated Architecture: TECHLEAD agent designs appropriate tech stack based on requirements
- Milestone Planning: DEVTEAM creates structured development milestones
- Assisted Deployment: SHIPPER generates deployment configurations, Docker Compose files, and deployment instructions (manual deployment required)
- Health Monitoring: SENTRY tracks project health, budget, and timeline
autonoma/
├── backend/ # Node.js + Express + Prisma backend
│ ├── src/
│ │ ├── agents/ # AI agent implementations
│ │ │ ├── coord/ # Coordinator agent
│ │ │ ├── analyst/ # Requirements agent
│ │ │ ├── techlead/ # Architecture agent
│ │ │ ├── devteam/ # Development agent
│ │ │ ├── shipper/ # Deployment agent
│ │ │ └── sentry/ # Monitoring agent
│ │ ├── config/ # Configuration
│ │ ├── db/ # Database client
│ │ ├── models/ # TypeScript types
│ │ ├── services/ # Business logic
│ │ ├── routes/ # API endpoints
│ │ └── utils/ # Utilities (LLM client, etc.)
│ └── prisma/ # Database schema
└── frontend/ # React + TypeScript + TailwindCSS
└── src/
├── components/ # Reusable components
├── pages/ # Page components
├── api/ # API client helpers
├── types/ # TypeScript types
└── layouts/ # Layout components
Choose your setup method based on your needs:
cd Autonoma/autonoma
cp .env.example .env
# Edit .env with your configuration (see docs/SETUP_GUIDE.md for production checklist)
docker-compose up -d
Access: Frontend (http://localhost) | Backend API (http://localhost:4000)
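The variables referenced elsewhere in this README (DATABASE_URL, JWT_SECRET, LLM_API_URL, LLM_API_KEY, and the per-agent LLM_MODEL_* settings) suggest the general shape of the .env file. The values below are illustrative placeholders, not the actual defaults from .env.example:

```env
DATABASE_URL=postgresql://user:password@localhost:5432/autonoma
JWT_SECRET=change-me-in-production
LLM_API_URL=https://api.example.com/v1
LLM_API_KEY=your-api-key
LLM_MODEL_COORD=gpt-4
```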
cd Autonoma/autonoma
# Backend
cd backend && npm install
cp .env.example .env # Edit with your configuration
npx prisma generate && npx prisma migrate dev
npm run dev # Runs on http://localhost:4000
# Frontend (new terminal)
cd ../frontend && npm install
echo 'VITE_API_BASE_URL=http://localhost:4000' > .env
npm run dev # Runs on http://localhost:5173
For comprehensive setup instructions including:
- Pre-setup checklists (security, dependencies, prerequisites)
- Development vs Production differences and best practices
- Docker configuration for both dev and prod environments
- Troubleshooting common issues
- Post-setup verification steps
→ See docs/SETUP_GUIDE.md for the complete setup guide
Additional documentation:
- Navigate to http://localhost:5173
- Log in (default credentials will be created on first run)
- Click "New Project"
- Follow the wizard:
- Select or create a client
- Describe what the client wants
- Choose platforms and budget tier
- Review the AI-generated proposal
- Accept a scope option (MVP/Standard/Extended)
- View the automatically generated architecture and milestones
- Dashboard: View all active projects with health metrics
- Project Detail: See milestones, architecture, and metrics
- Proposal Review: Review and accept/revise proposals
- Deployment: Configure deployment settings and generate deployment configurations (Docker Compose files, environment configs, and step-by-step deployment instructions for manual execution)
- POST /api/auth/login – Login
- POST /api/auth/logout – Logout
- GET /api/auth/me – Get current user
- GET /api/clients – List clients
- POST /api/clients – Create client
- GET /api/clients/:id – Get client
- GET /api/projects – List projects
- POST /api/projects – Create project (triggers COORD + ANALYST)
- GET /api/projects/:id – Get project detail
- POST /api/projects/:id/recompute-metrics – Recompute SENTRY metrics
- GET /api/projects/:projectId/proposals – List proposals
- GET /api/proposals/:proposalId – Get proposal
- POST /api/proposals/:proposalId/accept – Accept proposal (triggers TECHLEAD + DEVTEAM)
- POST /api/proposals/:proposalId/request-revision – Request revision
- PATCH /api/milestones/:id – Update milestone status
- GET /api/milestones/:id/detail – Get milestone detail with DEVTEAM summary
- GET /api/projects/:projectId/deployment – Get deployment config
- POST /api/projects/:projectId/deployment/preferences – Save deployment preferences
- POST /api/projects/:projectId/deployment/plan – Generate deployment plan (calls SHIPPER)
- GET /api/dashboard/summary – Get dashboard summary with all projects
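A minimal sketch of how a frontend helper under frontend/src/api might wrap these endpoints. Only the paths come from the list above; the function names, the hard-coded base URL (VITE_API_BASE_URL in the real client), and the cookie-based auth assumption are illustrative:

```typescript
// Illustrative API client sketch; names and auth details are assumptions.
const BASE_URL = "http://localhost:4000"; // VITE_API_BASE_URL in the real client

// Build a full URL for a documented endpoint path.
export function apiUrl(path: string): string {
  return `${BASE_URL}${path}`;
}

// Example: accepting a proposal triggers TECHLEAD + DEVTEAM on the backend.
export async function acceptProposal(proposalId: string): Promise<unknown> {
  const res = await fetch(apiUrl(`/api/proposals/${proposalId}/accept`), {
    method: "POST",
    credentials: "include", // assumes a session cookie from /api/auth/login
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}
```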
cd backend
npx prisma migrate dev --name migration_name
View and edit database data:
cd backend
npx prisma studio
To add a new agent:
- Create the agent directory in backend/src/agents/
- Create the agent file with its system prompt and functions
- Import and use it in services/routes
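The agent modules under backend/src/agents each pair a system prompt with functions the services call. A hypothetical skeleton for a new agent — the agent name, interface shapes, and function names here are assumptions, not the project's real contract:

```typescript
// Hypothetical skeleton for a new agent module (shapes assumed).
export interface ChatMessage {
  role: "system" | "user";
  content: string;
}

export const REVIEWER_SYSTEM_PROMPT =
  "You are REVIEWER, an agent that audits milestone deliverables. " +
  "Always respond with valid JSON.";

// Build the message list passed to the LLM client in backend/src/utils.
export function buildMessages(task: string): ChatMessage[] {
  return [
    { role: "system", content: REVIEWER_SYSTEM_PROMPT },
    { role: "user", content: task },
  ];
}
```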
- Intake → Operator creates project
- Scoping → COORD + ANALYST generate structured proposal
- Proposal → Operator reviews and accepts scope
- Design → TECHLEAD creates architecture plan
- Planning → DEVTEAM creates milestones
- Development → Operator tracks milestone progress
- Deployment → SHIPPER generates deployment configurations and instructions (manual deployment required)
- Monitoring → SENTRY tracks health and alerts
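The lifecycle above is a simple linear progression, which can be sketched as data. The phase names come from the list; the types and helper are illustrative:

```typescript
// Project lifecycle phases, in order, as listed above.
export const PHASES = [
  "Intake", "Scoping", "Proposal", "Design",
  "Planning", "Development", "Deployment", "Monitoring",
] as const;

export type Phase = (typeof PHASES)[number];

// Return the next phase, or null once Monitoring is reached.
export function nextPhase(current: Phase): Phase | null {
  const i = PHASES.indexOf(current);
  return i >= 0 && i < PHASES.length - 1 ? PHASES[i + 1] : null;
}
```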
COORD (orchestrates)
├── calls ANALYST (requirements)
├── calls TECHLEAD (architecture)
├── calls DEVTEAM (milestones)
├── calls SHIPPER (deployment)
└── calls SENTRY (health monitoring)
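The delegation order in this diagram can be sketched as an async pipeline, with each specialist's output feeding the next call. The stubs and function names are hypothetical stand-ins for the real agent implementations:

```typescript
// Hypothetical stubs standing in for the real agent implementations.
type Agent = (input: string) => Promise<string>;

const stub = (name: string): Agent => async (input) => `${name}:${input}`;
const analyst = stub("ANALYST");
const techlead = stub("TECHLEAD");
const devteam = stub("DEVTEAM");
const shipper = stub("SHIPPER");
const sentry = stub("SENTRY");

// COORD delegates to each specialist in turn, passing results forward.
export async function coordRun(brief: string): Promise<string[]> {
  const requirements = await analyst(brief);
  const architecture = await techlead(requirements);
  const milestones = await devteam(architecture);
  const deployment = await shipper(milestones);
  const health = await sentry(deployment);
  return [requirements, architecture, milestones, deployment, health];
}
```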
- STARTER Mode (default): Simple, constrained stack (React + Node + PostgreSQL)
- PRO Mode: Advanced patterns allowed (microservices, queues, etc.)
Configure per-agent models in .env:
LLM_MODEL_COORD=gpt-4
LLM_MODEL_ANALYST=gpt-4
LLM_MODEL_TECHLEAD=gpt-4
LLM_MODEL_DEVTEAM=gpt-4
LLM_MODEL_SHIPPER=gpt-4
LLM_MODEL_SENTRY=gpt-4
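One plausible way the backend could resolve these per-agent variables — the lookup convention matches the names above, but the function and the "gpt-4" fallback are assumptions for illustration:

```typescript
// Resolve an agent's model from environment-style config, e.g. LLM_MODEL_COORD.
// The "gpt-4" fallback is an assumption for illustration.
export function modelFor(
  agent: string,
  env: Record<string, string | undefined> = process.env,
): string {
  return env[`LLM_MODEL_${agent.toUpperCase()}`] ?? "gpt-4";
}
```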
- Verify LLM_API_URL is correct and accessible
- Check that LLM_API_KEY is valid
- Ensure the firewall allows outbound connections
- Verify PostgreSQL is running
- Check the DATABASE_URL format
- Ensure the database exists
- Check backend logs for detailed error messages
- Verify LLM responses are valid JSON
- Adjust agent prompts if needed
- Change JWT_SECRET in production
- Never commit .env files
- Use proper secrets management for production
- Encrypt LLM API keys in database
- Implement rate limiting for production
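When setting JWT_SECRET for production, a randomly generated value is safer than a hand-typed one. For example, assuming openssl is available:

```shell
# Generate a 48-byte random secret, base64-encoded (64 characters).
openssl rand -base64 48
```

Paste the output into your secrets manager rather than committing it to .env.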
MIT
For issues and questions, please open an issue on GitHub.