| title | MAYA AI |
|---|---|
| emoji | 🧬 |
| colorFrom | indigo |
| colorTo | yellow |
| sdk | static |
| pinned | true |
| license | apache-2.0 |
| short_description | MAYA AI — Proto_AGI, Evolutionary LLM Merging, Leaderboards & Chat |
Building the path to Proto_AGI through evolutionary model merging, open benchmarks, and conversational intelligence.
MAYA AI is a research-driven AI lab pioneering the next generation of open-source language intelligence. We combine three pillars into a single mission:
| Pillar | Focus | Flagship |
|---|---|---|
| 🧬 Proto_AGI Research | Evolutionary LLM model merging (Darwin V7 lineage) | Darwin-31B-Opus / Darwin-27B-Opus |
| 🏆 Open Benchmarks | Transparent multi-benchmark leaderboards | all-leaderboard |
| 💬 Conversational AI | Production-grade chat & voice interfaces | QWEN-3_5-CHAT |
Our flagship Darwin V7 evolutionary model merging system has produced state-of-the-art results across multiple Korean and English benchmarks:
| Model | CLIcK | KMMLU | GPQA | Notes |
|---|---|---|---|---|
| Darwin-31B-Opus | 84.5 | 76.0 | 85.9 | 🥇 Overall #1 |
| Darwin-27B-Opus | 82.1 | 74.3 | 86.9 | 🥇 GPQA SOTA |
| Darwin-14B-Pro | 79.8 | 71.5 | 82.3 | Efficient tier |
| Lastbrain-MoE | 78.2 | 69.8 | 80.1 | MoE variant |
13 models benchmarked across Darwin (8) + Lastbrain (3) + AETHER (2) lineages. See all-leaderboard for full rankings.
- Evolutionary search over merge recipes — no gradient training required
- Multi-objective optimization — balance Korean fluency, reasoning, and safety
- Transparent lineage tracking — every model is traceable to its parents
- Open weights — releasing under Apache 2.0 where base licenses permit
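The lineage-tracking idea above can be sketched as a small parent graph, where every merged checkpoint records its parents and can be traced back to its base models. All names and fields here are illustrative assumptions, not Darwin V7's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative lineage record: each merged model keeps references to its
# parents, so any checkpoint can be traced back to its base models.
# Names ("merge-gen1", "Qwen-base", ...) are placeholders.

@dataclass
class ModelNode:
    name: str
    parents: list["ModelNode"] = field(default_factory=list)

    def ancestry(self) -> set[str]:
        """All base (parentless) models reachable from this node."""
        if not self.parents:
            return {self.name}
        return set().union(*(p.ancestry() for p in self.parents))

qwen = ModelNode("Qwen-base")
llama = ModelNode("Llama-base")
gen1 = ModelNode("merge-gen1", [qwen, llama])
gen2 = ModelNode("merge-gen2", [gen1, qwen])
```

With this structure, `gen2.ancestry()` resolves to the two base models, however many merge generations sit in between.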
- 🏆 all-leaderboard — Unified leaderboard across CLIcK, KMMLU, GPQA, HAE-RAE, and more (⭐ 28)
- 💬 QWEN-3_5-CHAT — Qwen 3.5 conversational interface with HF OAuth (⭐ 16)
- 🗣️ fish-s2-pro-zero — Zero-shot speech-to-speech voice cloning (⭐ 3)
- 🤖 openclaw-moltbot — LLM battle arena chatbot (⭐ 6)
- 📊 all-bench — Aggregated benchmark landing
- 📝 EXAM-FINALBENCH — Korean exam-based evaluation
```
github.com/mayafree-ai/
├── AI                ← 🌐 Brand landing page (this repo)
├── Huggingface-MAYA  ← 🤗 Mirrored HF Space source code (4 flagship Spaces)
├── darwin-benchmark  ← 📊 (coming) Darwin V7 benchmark results & methodology
├── maya-leaderboard  ← 🏆 (coming) Open leaderboard aggregator
└── maya-chat         ← 💬 (coming) Chat interface deployment templates
```
We publish deep-dive technical articles on Medium covering:
- "Evolutionary LLM Merging: How Darwin V7 Builds Stronger Models Without Training" (Q2 2026)
- "Why GPQA 86.9% Matters: Small Models, Big Reasoning" (Q2 2026)
- "Open Leaderboards for Korean LLMs: A Unified View" (Q2 2026)
- "From Model Merging to Proto_AGI: The MAYA AI Research Agenda" (Q3 2026)
- "Building Production Chat Interfaces with HF Spaces OAuth" (Q3 2026)
👉 Follow us on Medium: (link coming soon)
Darwin V7 treats LLM weights as a genome. Merge recipes (TIES, DARE, SLERP, ties-dare) act as crossover operators; benchmark suites serve as fitness functions. The system evolves model populations across generations with elitism, mutation, and multi-objective Pareto selection.
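The loop described above can be sketched in a few dozen lines: a recipe genome, a mutation operator, Pareto-based elitism, and offspring refill. This is a minimal sketch under assumed names (`Recipe`, `evaluate`, `MERGE_METHODS`), not the actual Darwin V7 framework:

```python
import random
from dataclasses import dataclass

# Hypothetical merge methods acting as crossover operators.
MERGE_METHODS = ["ties", "dare", "slerp", "ties-dare"]

@dataclass
class Recipe:
    method: str
    weights: list[float]  # per-parent mixing weights, normalized to sum 1

    def mutate(self, rate: float = 0.2) -> "Recipe":
        """Randomly perturb the merge method and mixing weights."""
        method = random.choice(MERGE_METHODS) if random.random() < rate else self.method
        weights = [max(0.0, w + random.gauss(0, 0.05)) if random.random() < rate else w
                   for w in self.weights]
        s = sum(weights) or 1.0
        return Recipe(method, [w / s for w in weights])

def dominates(a, b):
    """Pareto dominance over per-benchmark scores (higher is better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def evolve(population, evaluate, generations=10, elite=2):
    """Evolve merge recipes; `evaluate` returns a tuple of benchmark scores."""
    for _ in range(generations):
        scored = [(r, evaluate(r)) for r in population]
        # Multi-objective selection: keep Pareto-nondominated recipes as elites.
        front = [r for r, s in scored
                 if not any(dominates(t, s) for _, t in scored if t is not s)]
        elites = front[:elite]
        # Refill the population with mutated offspring of the elites.
        population = elites + [random.choice(elites).mutate()
                               for _ in range(len(population) - len(elites))]
    return population
```

Here `evaluate` stands in for a real benchmark suite (e.g. running a merged checkpoint through CLIcK or GPQA); in this sketch it can be any function that scores a recipe on several objectives at once.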
We believe leaderboards must be transparent, reproducible, and multi-dimensional. Our all-leaderboard aggregates results across:
- Korean benchmarks: CLIcK, KMMLU, HAE-RAE, KoBEST
- Reasoning benchmarks: GPQA, MMLU, ARC-Challenge
- Chat/Instruction: MT-Bench, AlpacaEval
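One way to make such a multi-benchmark leaderboard scale-free is to rank models per benchmark and sort by mean rank, so no single benchmark's score range dominates. The sketch below is an illustrative aggregation, not necessarily the method all-leaderboard uses; the scores are taken from the Darwin table above:

```python
# Aggregate per-benchmark scores into one ranking by mean rank.
scores = {
    "Darwin-31B-Opus": {"CLIcK": 84.5, "KMMLU": 76.0, "GPQA": 85.9},
    "Darwin-27B-Opus": {"CLIcK": 82.1, "KMMLU": 74.3, "GPQA": 86.9},
    "Darwin-14B-Pro":  {"CLIcK": 79.8, "KMMLU": 71.5, "GPQA": 82.3},
}

def aggregate(scores):
    benches = sorted({b for s in scores.values() for b in s})
    ranks = {m: [] for m in scores}
    for b in benches:
        # Rank models on this benchmark, best score first.
        ordered = sorted(scores, key=lambda m: scores[m].get(b, float("-inf")),
                         reverse=True)
        for i, m in enumerate(ordered, 1):
            ranks[m].append(i)
    # Lower mean rank = better overall standing.
    return sorted(((m, sum(r) / len(r)) for m, r in ranks.items()),
                  key=lambda x: x[1])

for model, mean_rank in aggregate(scores):
    print(f"{model}: mean rank {mean_rank:.2f}")
```

Note how Darwin-31B-Opus tops the mean-rank ordering even though Darwin-27B-Opus wins GPQA outright, matching the "Overall #1" vs. "GPQA SOTA" split in the table.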
Production chat systems require more than a nice prompt. Our chat Spaces ship with HF OAuth, rate limiting, streaming inference, and custom domain routing — a template for deploying LLM services responsibly.
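One of the features listed above, per-user rate limiting, is commonly implemented as a token bucket keyed by the authenticated user's OAuth identity. The sketch below is a generic illustration of that pattern, not the actual code shipped in the Spaces:

```python
import time

class TokenBucket:
    """Classic token bucket: refill at `rate` tokens/sec up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; otherwise reject the request."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# One bucket per authenticated user (e.g. keyed by HF OAuth username).
buckets: dict[str, TokenBucket] = {}

def check(user: str) -> bool:
    bucket = buckets.setdefault(user, TokenBucket(rate=1.0, capacity=5))
    return bucket.allow()
```

A request handler would call `check(username)` before invoking streaming inference and return a 429-style error when it comes back `False`.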
- Models: Qwen, Llama, Mistral base lineages
- Merging: Custom evolutionary framework (Darwin V7)
- Evaluation: lm-evaluation-harness, custom Korean suites
- Serving: Hugging Face Spaces (Gradio, Docker, Static)
- Infrastructure: AETHER (B200 GPU cluster)
We welcome research collaboration, benchmark contributions, and enterprise inquiries.
- 🐦 Twitter: @mayafree_ai
- 🤗 Hugging Face: mayafree | MAYA-AI
- 🌐 Website: mayafree.ai
- 📧 Email: mayafreeai@gmail.com
This landing page repository is licensed under Apache License 2.0.
Individual models and Space sources retain their own licenses as declared in each Space/model card.
Last updated: 2026-04-22 · Maintained by MAYA AI Research