This hands-on lab walks you through a step-by-step approach to efficiently serving and fine-tuning large-scale Korean models on AWS infrastructure.
[ Text Analytics ] Development of a Korean LLM specialized for the legal domain
Hybrid Mamba-2 + Transformer 2.94B LLM (Nemotron-H style) — Korean 3B model pretrained from scratch on 7× NVIDIA B200 GPUs with SFT + DPO alignment
🇰🇷 Korean Sovereign AI launcher for Apple Silicon · runs Korean sovereign AI models locally (LG EXAONE / SKT A.X / Upstage Solar) · MLX + Ollama · automatic installation / menu / memory management · optimized for Mac M5 Pro 24GB
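To give a sense of what an MLX + Ollama launcher like the one above ultimately drives, here is a minimal sketch using the Ollama Python client; the model tag `exaone3.5` and the prompt are illustrative assumptions, not taken from the repository.

```python
# Minimal sketch: one chat turn against a locally served Korean model via the
# Ollama Python client. Assumes the Ollama daemon is running and a model has
# been pulled beforehand, e.g. `ollama pull exaone3.5` (the tag is illustrative).
import ollama

response = ollama.chat(
    model="exaone3.5",
    messages=[{"role": "user", "content": "Introduce yourself briefly in Korean."}],
)
print(response["message"]["content"])
```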
Instruction-fine-tuned autoregressive language model based on the SOLAR transformer architecture
Korean 3B LLM (pure Transformer) pretrained from scratch on 8× NVIDIA B200 GPUs with SFT + ORPO alignment
MAYA AI brand landing page · Proto_AGI research, evolutionary LLM merging (Darwin V7), open leaderboards, conversational AI