Fix: Configure gradient accumulation and chunking collator for stable training #301
quangkmhd wants to merge 1 commit into facebookresearch:main
Conversation
Hi @quangkmhd! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
Pull request overview
This PR configures gradient accumulation to stabilize training and prevent numeric errors when training with the Roboflow v100 dataset. The configuration now splits batches into micro-batches for accumulated gradient updates.
Key Changes:
- Increased gradient accumulation steps and batch size from 1 to 16
- Switched to chunking collator to support gradient accumulation
- Linked num_chunks parameter to gradient_accumulation_steps for consistency
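Taken together, the changes above might look roughly like the following config fragment. This is an illustrative sketch only: the key names, nesting, and the Hydra-style `_target_`/`${...}` interpolation are assumptions, not the repo's actual schema; only the `sam3.train.data.collator.collate_fn_api_with_chunking` path and the values 16 come from the PR itself.

```yaml
# Hypothetical training-config fragment illustrating the PR's changes;
# key names and layout are assumptions, not the repo's actual schema.
trainer:
  gradient_accumulation_steps: 16   # was 1
data:
  train_batch_size: 16              # was 1
  collator:
    # Chunking collator returns a list of micro-batches instead of one batch dict.
    _target_: sam3.train.data.collator.collate_fn_api_with_chunking
    # Kept in sync with gradient_accumulation_steps, as the PR describes.
    num_chunks: ${trainer.gradient_accumulation_steps}
```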
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
drduhe left a comment:
Thanks - this seems a lot more logical / scalable than the previous defaults.
Summary
This PR updates the training configuration to enable gradient accumulation. This change stabilizes the training loss and prevents the `ValueError: matrix contains invalid numeric entries` error, which often occurs due to unstable gradients when training with small batch sizes.

Changes

- Set `gradient_accumulation_steps` to 16.
- Set `train_batch_size` to 16.
- Switched the collator to `sam3.train.data.collator.collate_fn_api_with_chunking`. This is critical because the trainer expects a list of micro-batches (chunks) when gradient accumulation is enabled, whereas the original `collate_fn_api` returned a single batch dict, causing an `AssertionError`.
- Linked the `num_chunks` parameter to `gradient_accumulation_steps`.
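The chunking behavior described above can be sketched in a few lines. This is an illustrative stand-in, not the real `collate_fn_api_with_chunking` (which presumably also runs the underlying API collation on each chunk); the function name `collate_with_chunking` is hypothetical.

```python
from typing import Any, List

def collate_with_chunking(samples: List[Any], num_chunks: int) -> List[List[Any]]:
    """Split one batch of samples into num_chunks micro-batches.

    Sketch only: the trainer then iterates over the returned list,
    accumulating gradients across micro-batches before each optimizer step.
    """
    assert len(samples) % num_chunks == 0, "batch size must divide evenly into chunks"
    chunk_size = len(samples) // num_chunks
    return [samples[i * chunk_size:(i + 1) * chunk_size] for i in range(num_chunks)]
```

Returning a list of chunks (rather than one batch dict) is what satisfies the trainer's expectation under gradient accumulation and avoids the `AssertionError` mentioned above.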
Verified

Tested locally. The training process is now stable, and the `ValueError` regarding invalid numeric entries in the cost matrix is resolved.
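For intuition on why accumulating over micro-batches is mathematically equivalent to one large-batch step, here is a minimal pure-Python sketch (toy loss, not SAM 3 code): summing each micro-batch's mean gradient scaled by `1/num_chunks` reproduces the full-batch mean gradient exactly.

```python
def grad(w: float, x: float) -> float:
    # Gradient of the toy loss (w - x)**2 with respect to w.
    return 2.0 * (w - x)

def full_batch_grad(w: float, batch: list) -> float:
    # Mean gradient over the whole batch (what one large step would use).
    return sum(grad(w, x) for x in batch) / len(batch)

def accumulated_grad(w: float, batch: list, num_chunks: int) -> float:
    # Process the batch as micro-batches; scale each chunk's mean gradient
    # by 1/num_chunks, as gradient accumulation does by scaling the loss.
    chunk_size = len(batch) // num_chunks
    total = 0.0
    for c in range(num_chunks):
        chunk = batch[c * chunk_size:(c + 1) * chunk_size]
        total += (sum(grad(w, x) for x in chunk) / len(chunk)) / num_chunks
    return total
```

The equivalence holds for the averaged gradient; the stability gain in practice comes from the optimizer stepping on a 16-sample estimate instead of a noisy single-sample one.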