docs(automodel): add Nemotron-Nano-Omni V3 fine-tuning cookbook#170

Merged
marcromeyn merged 1 commit into NVIDIA-NeMo:main from HuiyingLi:huiyingl/nano-v3-omni-automodel-cookbook
Apr 28, 2026

Conversation

@HuiyingLi

Add the end-to-end NeMo AutoModel fine-tuning cookbook for
Nemotron-Nano-Omni (V3, 30B-A3B-Reasoning) on CORD-v2 receipts. Covers
full SFT and LoRA PEFT, environment setup, dataset exploration,
training, and inference.

This sits next to the existing GRPO assets under
usage-cookbook/Nemotron-3-Nano-Omni/ as the AutoModel-side counterpart.
Source: docs/guides/vlm/nemotron-omni.md in NVIDIA-NeMo/Automodel.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: HuiyingLi <willwin.lee@gmail.com>
marcromeyn merged commit ba04b5e into NVIDIA-NeMo:main Apr 28, 2026
5 of 7 checks passed

2 participants