22315 | 22315 | - filename: Simia-Tau-SFT-Qwen3-8B.Q4_K_S.gguf |
22316 | 22316 | sha256: b1019b160e4a612d91edd77f00bea01f3f276ecc8ab76de526b7bf356d4c8079 |
22317 | 22317 | uri: huggingface://mradermacher/Simia-Tau-SFT-Qwen3-8B-GGUF/Simia-Tau-SFT-Qwen3-8B.Q4_K_S.gguf |
| 22318 | +- !!merge <<: *qwen3 |
| 22319 | + name: "qwen3-coder-reap-25b-a3b-i1" |
| 22320 | + urls: |
| 22321 | + - https://huggingface.co/mradermacher/Qwen3-Coder-REAP-25B-A3B-i1-GGUF |
| 22322 | + description: | |
| 22323 | + **Model Name:** Qwen3-Coder-REAP-25B-A3B (Base Model: cerebras/Qwen3-Coder-REAP-25B-A3B) |
| 22324 | + **Model Type:** Large Language Model (LLM) for Code Generation |
| 22325 | + **Architecture:** Mixture-of-Experts (MoE) – Qwen3-Coder variant |
| 22326 | +      **Size:** 25B total parameters, with roughly 3B active per token (the "A3B" in the name) |
| 22327 | + **License:** Apache 2.0 |
| 22328 | + **Library:** Hugging Face Transformers |
| 22329 | + **Language Support:** Primarily English, optimized for coding tasks across multiple programming languages |
| 22330 | + |
| 22331 | + **Description:** |
| 22332 | +      **Qwen3-Coder-REAP-25B-A3B** is an open-source Mixture-of-Experts (MoE) code model released by Cerebras Systems, produced by applying REAP (Router-weighted Expert Activation Pruning) to the Qwen3-Coder MoE family to remove redundant experts while preserving coding performance. Built on the Qwen3 architecture, it targets understanding complex codebases, generating syntactically and semantically correct code, and solving programming tasks across diverse domains. |
| 22333 | + |
| 22334 | +      The linked repository hosts **imatrix-quantized GGUF conversions** (by mradermacher) of the original, unquantized base model; this entry installs the i1-Q4_K_S file, which trades a small amount of accuracy for a much lower memory footprint during local inference. |
| 22335 | + |
| 22336 | + Ideal for developers, AI researchers, and engineers working on code completion, debugging, documentation generation, and automated software development workflows. |
| 22337 | + |
| 22338 | + ✅ **Key Features:** |
| 22339 | + - State-of-the-art code generation |
| 22340 | + - 25B parameter scale with expert routing |
| 22341 | + - MoE architecture for efficient inference |
| 22342 | + - Full compatibility with Hugging Face Transformers |
| 22343 | + - Designed for real-world coding tasks |
| 22344 | + |
| 22345 | + **Base Model Repository:** [cerebras/Qwen3-Coder-REAP-25B-A3B](https://huggingface.co/cerebras/Qwen3-Coder-REAP-25B-A3B) |
| 22346 | + **Quantized Versions:** Available via [mradermacher/Qwen3-Coder-REAP-25B-A3B-i1-GGUF](https://huggingface.co/mradermacher/Qwen3-Coder-REAP-25B-A3B-i1-GGUF) (for local inference with GGUF) |
| 22347 | + |
| 22348 | +      > 🔍 **Note:** The GGUF quantizations are optimized for consumer hardware and differ slightly from the original full-precision weights. For the complete, unquantized model and its full model card, refer to the base repository above. |
| 22349 | + overrides: |
| 22350 | + parameters: |
| 22351 | + model: Qwen3-Coder-REAP-25B-A3B.i1-Q4_K_S.gguf |
| 22352 | + files: |
| 22353 | + - filename: Qwen3-Coder-REAP-25B-A3B.i1-Q4_K_S.gguf |
| 22354 | + sha256: 3d96af010d07887d0730b0f681572ebb3a55e21557f30443211bc39461e06d5d |
| 22355 | + uri: huggingface://mradermacher/Qwen3-Coder-REAP-25B-A3B-i1-GGUF/Qwen3-Coder-REAP-25B-A3B.i1-Q4_K_S.gguf |
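
For reference, a minimal sketch (not part of this change) of exercising the quantized file this entry installs outside of LocalAI, using `huggingface_hub` and `llama-cpp-python`. Both packages are assumed to be installed; the repo id and filename come from the gallery entry above, while the context size and GPU offload values are illustrative assumptions:

```python
# Download the i1-Q4_K_S quant referenced in the gallery entry and run a short coding prompt.
# Repo id and filename are taken from the entry; everything else below is an assumption.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/Qwen3-Coder-REAP-25B-A3B-i1-GGUF",
    filename="Qwen3-Coder-REAP-25B-A3B.i1-Q4_K_S.gguf",
)

llm = Llama(
    model_path=model_path,
    n_ctx=8192,        # assumed context window for this example; adjust to available RAM
    n_gpu_layers=-1,   # offload all layers when a GPU-enabled llama.cpp build is present
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a linked list."}]
)
print(out["choices"][0]["message"]["content"])
```

Within LocalAI itself, the entry should be installable by its gallery name (`qwen3-coder-reap-25b-a3b-i1`) and then served through the OpenAI-compatible API like any other gallery model.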