
---
title: MAYA AI
emoji: 🧬
colorFrom: indigo
colorTo: yellow
sdk: static
pinned: true
license: apache-2.0
short_description: MAYA AI — Proto_AGI, Evolutionary LLM Merging, Leaderboards & Chat
---

🧬 MAYA AI — Proto_AGI & Evolutionary LLM Platform

Building the path to Proto_AGI through evolutionary model merging, open benchmarks, and conversational intelligence.

Hugging Face — mayafree · Hugging Face — MAYA-AI Org · Website · Twitter


🎯 What is MAYA AI?

MAYA AI is a research-driven AI lab pioneering the next generation of open-source language intelligence. We combine three pillars into a single mission:

| Pillar | Focus | Flagship |
| --- | --- | --- |
| 🧬 Proto_AGI Research | Evolutionary LLM model merging (Darwin V7 lineage) | Darwin-31B-Opus / Darwin-27B-Opus |
| 🏆 Open Benchmarks | Transparent multi-benchmark leaderboards | all-leaderboard |
| 💬 Conversational AI | Production-grade chat & voice interfaces | QWEN-3_5-CHAT |

🏆 Benchmark Highlights — Darwin V7

Our flagship Darwin V7 evolutionary model merging system has produced state-of-the-art results across multiple Korean and English benchmarks:

| Model | CLIcK | KMMLU | GPQA | Notes |
| --- | --- | --- | --- | --- |
| Darwin-31B-Opus | 84.5 | 76.0 | 85.9 | 🥇 Overall #1 |
| Darwin-27B-Opus | 82.1 | 74.3 | 86.9 | 🥇 GPQA SOTA |
| Darwin-14B-Pro | 79.8 | 71.5 | 82.3 | Efficient tier |
| Lastbrain-MoE | 78.2 | 69.8 | 80.1 | MoE variant |

13 models benchmarked across Darwin (8) + Lastbrain (3) + AETHER (2) lineages. See all-leaderboard for full rankings.

What makes Darwin different?

  • Evolutionary search over merge recipes — no gradient training required
  • Multi-objective optimization — balance Korean fluency, reasoning, and safety
  • Transparent lineage tracking — every model is traceable to its parents
  • Open weights — releasing under Apache 2.0 where base licenses permit
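
To make the "no gradient training" point concrete, here is a minimal sketch of one common merge operator, SLERP (spherical linear interpolation), applied to a pair of weight tensors. The function name, shapes, and NumPy implementation are illustrative assumptions, not the Darwin V7 codebase:

```python
# Illustrative SLERP merge of two weight tensors (not MAYA AI's actual code).
# Interpolates along the unit sphere between two parents' flattened weights.
import numpy as np

def slerp(w_a: np.ndarray, w_b: np.ndarray, t: float) -> np.ndarray:
    """Spherically interpolate between two same-shaped weight tensors."""
    a, b = w_a.ravel(), w_b.ravel()
    a_n, b_n = a / np.linalg.norm(a), b / np.linalg.norm(b)
    dot = np.clip(a_n @ b_n, -1.0, 1.0)
    theta = np.arccos(dot)                 # angle between parent directions
    if theta < 1e-6:                       # near-parallel: fall back to lerp
        return ((1 - t) * a + t * b).reshape(w_a.shape)
    s = np.sin(theta)
    mixed = (np.sin((1 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b
    return mixed.reshape(w_a.shape)
```

A merge recipe then amounts to choosing an operator like this plus its parameters (here just `t`) per layer, which is exactly the search space an evolutionary method can explore without any backpropagation.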

🚀 Live Demos on Hugging Face

Flagship Spaces

  • 🏆 all-leaderboard — Unified leaderboard across CLIcK, KMMLU, GPQA, HAE-RAE, and more (⭐ 28)
  • 💬 QWEN-3_5-CHAT — Qwen 3.5 conversational interface with HF OAuth (⭐ 16)
  • 🗣️ fish-s2-pro-zero — Zero-shot speech-to-speech voice cloning (⭐ 3)
  • 🤖 openclaw-moltbot — LLM battle arena chatbot (⭐ 6)

Benchmark Suites


🗺️ Repository Roadmap

```
github.com/mayafree-ai/
├── AI                    ← 🌐 Brand landing page (this repo)
├── Huggingface-MAYA      ← 🤗 Mirrored HF Space source code (4 flagship Spaces)
├── darwin-benchmark      ← 📊 (coming) Darwin V7 benchmark results & methodology
├── maya-leaderboard      ← 🏆 (coming) Open leaderboard aggregator
└── maya-chat             ← 💬 (coming) Chat interface deployment templates
```

📖 Medium Article Pipeline (shipping weekly)

We publish deep-dive technical articles on Medium covering:

  1. "Evolutionary LLM Merging: How Darwin V7 Builds Stronger Models Without Training" (Q2 2026)
  2. "Why GPQA 86.9% Matters: Small Models, Big Reasoning" (Q2 2026)
  3. "Open Leaderboards for Korean LLMs: A Unified View" (Q2 2026)
  4. "From Model Merging to Proto_AGI: The MAYA AI Research Agenda" (Q3 2026)
  5. "Building Production Chat Interfaces with HF Spaces OAuth" (Q3 2026)

👉 Follow us on Medium: (link coming soon)


🧪 Research & Engineering Focus Areas

🧬 Evolutionary Model Merging

Darwin V7 treats LLM weights as a genome. Merge recipes (TIES, DARE, SLERP, TIES-DARE) act as crossover operators; benchmark suites serve as fitness functions. The system evolves model populations across generations with elitism, mutation, and multi-objective Pareto selection.
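
The loop described above can be sketched as a toy genetic algorithm. Here a "genome" is just a vector of per-parent mixing weights, and the fitness function is a stand-in for a benchmark suite score; every name below is an illustrative assumption, not Darwin V7's implementation:

```python
# Toy evolutionary search over merge recipes (illustrative, not Darwin V7).
# Genome: per-parent mixing weights. Fitness: a placeholder for a benchmark
# score -- here it rewards recipes whose weights sum to ~1, tilted toward
# parent 0, purely so the demo has something to optimize.
import random

def fitness(recipe):
    return -abs(sum(recipe) - 1.0) + 0.1 * recipe[0]

def mutate(recipe, sigma=0.05):
    # Gaussian mutation, clipped so mixing weights stay non-negative.
    return [max(0.0, w + random.gauss(0, sigma)) for w in recipe]

def evolve(pop_size=20, n_parents=3, generations=30, n_elite=4, seed=0):
    random.seed(seed)
    pop = [[random.random() for _ in range(n_parents)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elites = pop[:n_elite]                       # elitism: keep the best
        children = [mutate(random.choice(elites))    # mutate elite clones
                    for _ in range(pop_size - n_elite)]
        pop = elites + children
    return max(pop, key=fitness)

best = evolve()
```

In the real system the fitness call would run the merged model through the benchmark suites, and selection would be multi-objective (Pareto) rather than a single scalar.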

🏆 Leaderboard Engineering

We believe leaderboards must be transparent, reproducible, and multi-dimensional. Our all-leaderboard aggregates results across:

  • Korean benchmarks: CLIcK, KMMLU, HAE-RAE, KoBEST
  • Reasoning benchmarks: GPQA, MMLU, ARC-Challenge
  • Chat/Instruction: MT-Bench, AlpacaEval
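
As a minimal sketch of aggregation across suites, the scores from the Darwin V7 table above can be combined into a single ranking. The plain-mean rule here is an assumption for illustration, not all-leaderboard's actual methodology:

```python
# Illustrative leaderboard aggregation (plain mean across benchmarks).
# Scores are the Darwin V7 numbers from the table above; the aggregation
# rule is an assumption, not all-leaderboard's real methodology.
scores = {
    "Darwin-31B-Opus": {"CLIcK": 84.5, "KMMLU": 76.0, "GPQA": 85.9},
    "Darwin-27B-Opus": {"CLIcK": 82.1, "KMMLU": 74.3, "GPQA": 86.9},
    "Darwin-14B-Pro":  {"CLIcK": 79.8, "KMMLU": 71.5, "GPQA": 82.3},
    "Lastbrain-MoE":   {"CLIcK": 78.2, "KMMLU": 69.8, "GPQA": 80.1},
}

def rank(scores):
    # Mean over whichever benchmarks each model reported, highest first.
    means = {m: sum(s.values()) / len(s) for m, s in scores.items()}
    return sorted(means.items(), key=lambda kv: kv[1], reverse=True)

leaderboard = rank(scores)
```

Even this trivial rule reproduces the table's ordering, with Darwin-31B-Opus first; a production aggregator would additionally handle missing entries, per-suite normalization, and confidence intervals.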

💬 Conversational Interfaces

Production chat systems require more than a nice prompt. Our chat Spaces ship with HF OAuth, rate limiting, streaming inference, and custom domain routing — a template for deploying LLM services responsibly.
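
One of the pieces listed above, rate limiting, is often implemented as a token bucket per user. This is a generic sketch of that pattern, not MAYA AI's actual middleware:

```python
# Generic token-bucket rate limiter (illustrative, not MAYA AI's middleware).
# Each request spends one token; tokens refill continuously at `rate` per
# second up to `capacity`, which bounds both sustained rate and burst size.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate                  # tokens refilled per second
        self.capacity = capacity          # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True                   # request admitted
        return False                      # over the limit; reject or queue
```

In a chat Space this would typically be keyed by the OAuth identity, so each authenticated user gets an independent bucket.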


🛠️ Tech Stack

  • Models: Qwen, Llama, Mistral base lineages
  • Merging: Custom evolutionary framework (Darwin V7)
  • Evaluation: lm-evaluation-harness, custom Korean suites
  • Serving: Hugging Face Spaces (Gradio, Docker, Static)
  • Infrastructure: AETHER (B200 GPU cluster)

🤝 Collaborate

We welcome research collaboration, benchmark contributions, and enterprise inquiries.


📜 License

This landing page repository is licensed under Apache License 2.0.

Individual models and Space sources retain their own licenses as declared in each Space/model card.


Last updated: 2026-04-22 · Maintained by MAYA AI Research
