AutoMCP is a platform that automatically generates ready-to-use Model Context Protocol (MCP) servers from your backend API documentation. Simply upload an OpenAPI or Swagger specification, or a Postman collection, and AutoMCP uses Google's Gemini AI to generate a complete, production-ready MCP server package.
- AI-Powered Generation: Uses Google Gemini AI to generate high-quality MCP server code
- Multiple Format Support: Supports OpenAPI 3.x, Swagger 2.x, and Postman Collections
- Complete Package Generation: Generates a full MCP server with:
  - TypeScript tool implementations
  - Type definitions
  - Configuration files
  - package.json with dependencies
  - Comprehensive README
- Smart Parsing: Automatically extracts endpoints, parameters, schemas, and authentication schemes
- Ready-to-Use: Generated packages are immediately deployable
- Secure: Local file storage, no cloud dependencies
- Fast: Efficient parsing and code generation pipeline
- Node.js >= 18.x
- npm >= 9.x
- Google Gemini API key (get one from Google AI Studio)
- Clone the repository

  ```bash
  git clone <repository-url>
  cd AutoMCP
  ```

- Install dependencies

  ```bash
  npm install
  ```

- Set up environment variables

  ```bash
  cp .env.example .env
  ```

  Edit `.env` and add your Gemini API key:

  ```env
  PORT=3000
  GEMINI_API_KEY=your_gemini_api_key_here
  GEMINI_MODEL=gemini-1.5-pro
  GEMINI_MAX_TOKENS=32768
  MAX_FILE_SIZE=10485760
  ```

- Build the project

  ```bash
  npm run build
  ```
| Variable | Description | Default |
|---|---|---|
| `PORT` | Server port | `3000` |
| `GEMINI_API_KEY` | Google Gemini API key | Required |
| `GEMINI_MODEL` | Gemini model to use | `gemini-1.5-pro` |
| `GEMINI_TEMPERATURE` | Generation temperature | `0.7` |
| `GEMINI_MAX_TOKENS` | Maximum tokens per request | `32768` |
| `MAX_FILE_SIZE` | Max upload file size (bytes) | `10485760` (10 MB) |
| `NODE_ENV` | Environment mode | `development` |
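As a rough illustration, these variables might be loaded into a typed config object along these lines (a minimal sketch assuming the dotenv package; the actual `src/config/` module may differ):

```typescript
// Illustrative config loading -- not the literal contents of src/config/.
import 'dotenv/config'; // assumes dotenv is installed

export const config = {
  port: Number(process.env.PORT ?? 3000),
  geminiApiKey: process.env.GEMINI_API_KEY ?? '',
  geminiModel: process.env.GEMINI_MODEL ?? 'gemini-1.5-pro',
  geminiTemperature: Number(process.env.GEMINI_TEMPERATURE ?? 0.7),
  geminiMaxTokens: Number(process.env.GEMINI_MAX_TOKENS ?? 32768),
  maxFileSize: Number(process.env.MAX_FILE_SIZE ?? 10_485_760), // 10 MB
  nodeEnv: process.env.NODE_ENV ?? 'development',
};

// GEMINI_API_KEY has no default and must be provided.
if (!config.geminiApiKey) {
  throw new Error('GEMINI_API_KEY is required');
}
```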
Supported Gemini models:

- `gemini-1.5-pro` (recommended)
- `gemini-pro`
- `gemini-1.5-flash`
To list available models:
```bash
npm run list-models
```

Development mode:

```bash
npm run dev
```

Production mode:

```bash
npm run build
npm start
```

The server will start on http://localhost:3000.
Upload your API documentation:

```bash
curl -X POST http://localhost:3000/api/upload \
  -F "file=@your-api-docs.json"
```

Response:

```json
{
  "status": "success",
  "fileId": "8f4be01f-236d-4f67-8b70-6cf347b81d12",
  "filename": "pet-store-openapi.json",
  "metadata": {
    "title": "Swagger Petstore",
    "version": "1.0.0",
    "endpoints": 19
  }
}
```

Generate the MCP server from the uploaded file:

```bash
curl -X POST http://localhost:3000/api/generate \
  -H "Content-Type: application/json" \
  -d '{
    "fileId": "8f4be01f-236d-4f67-8b70-6cf347b81d12"
  }'
```

Response:

```json
{
  "status": "success",
  "packageId": "8f4be01f-236d-4f67-8b70-6cf347b81d12",
  "packageName": "mcp-swagger-petstore---openapi-3-0",
  "downloadUrl": "/api/download/8f4be01f-236d-4f67-8b70-6cf347b81d12",
  "toolsGenerated": 19
}
```

Check generation status:

```bash
curl http://localhost:3000/api/status/8f4be01f-236d-4f67-8b70-6cf347b81d12
```

Download the generated package:

```bash
curl -O http://localhost:3000/api/download/8f4be01f-236d-4f67-8b70-6cf347b81d12
```

Or open in browser:

http://localhost:3000/api/download/8f4be01f-236d-4f67-8b70-6cf347b81d12
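The same upload → generate → download flow can also be scripted. Below is a minimal sketch using Node 18's built-in fetch against the endpoints shown above (the input file name and output path are placeholders):

```typescript
// upload-and-generate.ts -- illustrative client for the AutoMCP HTTP API (Node 18+).
import { readFile, writeFile } from 'node:fs/promises';

const BASE = 'http://localhost:3000';

async function main() {
  // 1. Upload the API documentation as multipart/form-data (field name "file").
  const form = new FormData();
  const doc = await readFile('your-api-docs.json');
  form.append('file', new Blob([doc], { type: 'application/json' }), 'your-api-docs.json');
  const upload = await (await fetch(`${BASE}/api/upload`, { method: 'POST', body: form })).json();
  console.log('Uploaded:', upload.fileId);

  // 2. Generate the MCP server package from the uploaded file.
  const generate = await (await fetch(`${BASE}/api/generate`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ fileId: upload.fileId }),
  })).json();
  console.log('Generated:', generate.packageName, `(${generate.toolsGenerated} tools)`);

  // 3. Download the ZIP archive via the returned downloadUrl.
  const zip = await fetch(`${BASE}${generate.downloadUrl}`);
  await writeFile('mcp-server-package.zip', Buffer.from(await zip.arrayBuffer()));
  console.log('Saved mcp-server-package.zip');
}

main().catch(console.error);
```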
Import the provided Postman collection (AutoMCP.postman_collection.json) for easy testing. See POSTMAN_SETUP.md for details.
Health check endpoint.
Response:
```json
{
  "status": "ok",
  "timestamp": "2024-01-15T10:30:00.000Z",
  "uptime": 3600
}
```

Upload API documentation file (OpenAPI/Swagger/Postman).
Request:

- Method: `POST`
- Content-Type: `multipart/form-data`
- Body: `file` (file field)

Response:

```json
{
  "status": "success",
  "fileId": "uuid",
  "filename": "api-docs.json",
  "metadata": { ... }
}
```

Generate MCP server from uploaded file.
Request:

```json
{
  "fileId": "uuid"
}
```

Response:

```json
{
  "status": "success",
  "packageId": "uuid",
  "packageName": "mcp-api-name",
  "downloadUrl": "/api/download/uuid",
  "toolsGenerated": 19
}
```

Download generated MCP server package as ZIP.
Response:
- Content-Type: `application/zip`
- File: `mcp-server-package.zip`
Get generation status of a package.
Response:
```json
{
  "status": "completed",
  "packageId": "uuid",
  "packageName": "mcp-api-name",
  "toolsGenerated": 19,
  "createdAt": "2024-01-15T10:30:00.000Z"
}
```

Architecture:

```text
AutoMCP
├── API Layer (Express)
│   ├── Upload Controller
│   ├── Generate Controller
│   └── Download Controller
├── Parser Layer
│   ├── OpenAPI Parser
│   ├── Schema Resolver
│   └── Data Transformer
├── Gemini Integration
│   ├── Gemini Client (with retry logic)
│   ├── Code Generator
│   └── Prompt Templates
├── Generator Layer
│   ├── MCP Package Generator
│   └── Config Template Generator
└── Storage Layer
    ├── File Storage (Local)
    └── Package Storage (Local)
```
Data flow:

- Upload → File saved to temp storage
- Parse → OpenAPI/Swagger parsed into structured format
- Transform → Data normalized for Gemini prompts
- Generate → Gemini AI generates MCP server code
- Assemble → Code assembled into complete package
- Package → ZIP archive created
- Download → Package available for download
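The flow above is essentially a linear pipeline over the layers in the architecture diagram. Here is a simplified, hypothetical sketch of how the pieces compose; the function and module names below are illustrative, not the actual exports under `src/`:

```typescript
// Illustrative orchestration of the generation pipeline.
// All names here are hypothetical; the real implementation lives in src/ and may differ.
import { randomUUID } from 'node:crypto';

interface ParsedApi { title: string; endpoints: unknown[] }

interface PipelineDeps {
  loadUpload: (fileId: string) => Promise<string>;                      // Storage Layer: read the uploaded spec
  parseSpec: (raw: string) => Promise<ParsedApi>;                       // Parser Layer: OpenAPI/Swagger -> structured format
  buildPrompts: (api: ParsedApi) => string[];                           // Transformer: normalize data for Gemini prompts
  generateCode: (prompts: string[]) => Promise<Record<string, string>>; // Gemini Integration: filename -> file contents
  packageZip: (files: Record<string, string>) => Promise<string>;       // Generator Layer: assemble + zip, returns path
}

async function generateMcpPackage(fileId: string, deps: PipelineDeps) {
  const raw = await deps.loadUpload(fileId);        // Upload (already stored)
  const api = await deps.parseSpec(raw);            // Parse
  const prompts = deps.buildPrompts(api);           // Transform
  const files = await deps.generateCode(prompts);   // Generate
  const zipPath = await deps.packageZip(files);     // Assemble + Package
  return { packageId: randomUUID(), toolsGenerated: api.endpoints.length, zipPath }; // served via /api/download/:packageId
}
```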
To test a generated MCP server, unzip and install it:

```bash
unzip mcp-server-package.zip
cd mcp-server-package
npm install
```

Create a `.env` file:

```env
API_BASE_URL=https://your-api.com/v2
API_KEY=your_api_key
```

Build and run it in the MCP Inspector:

```bash
npm run build
npx @modelcontextprotocol/inspector node dist/index.js
```

This opens a web UI at http://localhost:5173 where you can test all generated tools.
You can also invoke an individual generated tool directly:

```bash
node -e "
import('./dist/tools/getPetById.js').then(m =>
  m.getPetById({petId: 1})
    .then(console.log)
    .catch(console.error)
);
"
```

Project structure:

```text
src/
├── api/              # API routes and controllers
├── config/           # Configuration management
├── gemini/           # Gemini AI integration
├── generators/       # MCP package generators
├── middleware/       # Express middleware
├── parsers/          # API documentation parsers
├── transformers/     # Data transformers
├── types/            # TypeScript type definitions
├── utils/            # Utility functions
└── server.ts         # Main server entry point
```
```bash
# Development with hot reload
npm run dev

# Build TypeScript
npm run build

# Production start
npm start

# List available Gemini models
npm run list-models
```

Code style:

- TypeScript with strict mode
- ES2020 target
- CommonJS modules
- Express.js for API
- Async/await for async operations
Error: `404 Not Found - models/gemini-X is not found`

Solution:

- Check available models: `npm run list-models`
- Update `GEMINI_MODEL` in `.env` to a valid model
- The system will auto-detect and use available models

Error: `File too large`

Solution:

- Increase `MAX_FILE_SIZE` in `.env`
- Or compress your API documentation file

Error: `Generation timeout`

Solution:

- Increase `GEMINI_MAX_TOKENS` in `.env`
- Split large APIs into smaller services
- Check Gemini API quota limits

Error: `API error: 500 Internal Server Error`

Solution:

- Check the API base URL in the generated `config.ts`
- Verify API authentication credentials
- Test the API directly with `curl`
Enable verbose logging:

```bash
DEBUG=* npm run dev
```

Generated package structure:

```text
mcp-api-name/
├── src/
│   ├── index.ts      # MCP server entry point
│   ├── config.ts     # Configuration
│   ├── types.ts      # TypeScript types
│   └── tools/        # Generated tool implementations
│       ├── getPetById.ts
│       ├── addPet.ts
│       └── ...
├── package.json      # Dependencies
├── tsconfig.json     # TypeScript config
├── .env.example      # Environment template
├── .gitignore
└── README.md         # Usage instructions
```
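For a sense of the output, a tool file is roughly shaped like the sketch below. This is an illustrative Petstore-style `getPetById`, not verbatim generator output, and it assumes the generated `config.ts` exposes `baseUrl` and `apiKey` read from `API_BASE_URL` and `API_KEY`:

```typescript
// src/tools/getPetById.ts (illustrative sketch; the actual generated code will differ)
import { config } from '../config.js';

export interface GetPetByIdParams {
  petId: number; // path parameter from the source OpenAPI spec
}

// Calls GET {API_BASE_URL}/pet/{petId} and returns the parsed JSON body.
export async function getPetById(params: GetPetByIdParams): Promise<unknown> {
  const url = `${config.baseUrl}/pet/${encodeURIComponent(params.petId)}`;
  const response = await fetch(url, {
    headers: {
      Accept: 'application/json',
      // The auth header name depends on the security scheme in the source spec.
      ...(config.apiKey ? { api_key: config.apiKey } : {}),
    },
  });

  if (!response.ok) {
    throw new Error(`API error: ${response.status} ${response.statusText}`);
  }
  return response.json();
}
```

Generated tools generally follow this shape, with typed parameters derived from the endpoint schema, one HTTP call built from the base URL in `config.ts`, and basic error handling, though the exact code depends on the source spec.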
Contributions are welcome! Please follow these steps:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
ISC License
Acknowledgments:

- Model Context Protocol - The MCP specification
- Google Gemini AI - AI code generation
- OpenAPI Initiative - API specification standard
For issues, questions, or contributions:
- Open an issue on GitHub
- Check existing documentation
- Review troubleshooting section
Roadmap:

- Support for Postman Collections
- Swagger 2.x support improvements
- Custom prompt templates
- Batch generation
- Cloud storage integration
- Web UI dashboard
- API versioning support
- Authentication scheme improvements
Built with ❤️ for the hackathon