Add TRANSFORMER warmup policy for learning rate scheduling #3548
Summary:
This diff implements the TRANSFORMER warmup policy from "Attention is All You Need" (Vaswani et al., 2017) for learning rate scheduling in torchrec, and updates a model configuration to use it.
Implementation
Added `WarmupPolicy.TRANSFORMER` to `fbcode/torchrec/optim/warmup.py`, which implements the formula `min(step^-0.5, step * warm_steps^-1.5)` (sketched below). This schedule provides:

- linear warmup over the first `warm_steps` iterations
- inverse square root decay after `warm_steps`

The `max_iters` parameter serves as `warm_steps` in the formula. The schedule converges at step = warm_steps, where both terms in the min() function become equal.
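A minimal standalone sketch of that schedule, under the assumption that the constant `d_model^-0.5` factor from the paper is folded into the base learning rate; the helper name `transformer_lr_multiplier` is illustrative and this is not torchrec's actual implementation:

```python
def transformer_lr_multiplier(step: int, warm_steps: int) -> float:
    """Transformer schedule: min(step^-0.5, step * warm_steps^-1.5).

    For step < warm_steps the second term is smaller (linear warmup);
    for step > warm_steps the first term is smaller (inverse sqrt decay);
    at step == warm_steps both terms equal warm_steps ** -0.5.
    """
    step = max(step, 1)  # guard against step == 0 on the very first iteration
    return min(step ** -0.5, step * warm_steps ** -1.5)


# The two branches meet exactly at step == warm_steps, matching the
# convergence point described above.
assert abs(transformer_lr_multiplier(4000, 4000) - 4000 ** -0.5) < 1e-12
```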
Testing
Added comprehensive unit tests in `fbcode/torchrec/optim/tests/test_warmup.py`; the tests use `none_throws()` from `pyre_extensions` for type-safe `Optional` handling.

Updated `fbcode/torchrec/optim/tests/BUCK` to include the `pyre-extensions` dependency.
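As an illustration of the shape properties such tests can check (monotone warmup, monotone decay, and the two branches meeting at `warm_steps`), here is a hedged, self-contained example built on the `transformer_lr_multiplier` sketch above; it does not reflect the actual contents of `test_warmup.py` or torchrec's `WarmupOptimizer` API:

```python
import unittest

# Assumes the transformer_lr_multiplier helper sketched earlier is in scope.


class TransformerWarmupShapeTest(unittest.TestCase):
    WARM_STEPS = 100

    def test_warmup_then_decay(self) -> None:
        values = [
            transformer_lr_multiplier(step, self.WARM_STEPS)
            for step in range(1, 3 * self.WARM_STEPS + 1)
        ]
        peak = self.WARM_STEPS - 1  # index of step == WARM_STEPS (steps start at 1)
        # Strictly increasing up to the peak, strictly decreasing after it.
        self.assertTrue(all(a < b for a, b in zip(values[:peak], values[1 : peak + 1])))
        self.assertTrue(all(a > b for a, b in zip(values[peak:], values[peak + 1 :])))

    def test_branches_meet_at_warm_steps(self) -> None:
        step = self.WARM_STEPS
        self.assertAlmostEqual(step ** -0.5, step * self.WARM_STEPS ** -1.5)


if __name__ == "__main__":
    unittest.main()
```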
Configuration Update
Updated `fbcode/minimal_viable_ai/models/gysj/gysj_esr_roo/conf/model_roo_config.py` to use TRANSFORMER warmup.

Differential Revision: D87127589
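For illustration only, a minimal sketch of what such a configuration change could look like, assuming the warmup stage is expressed with torchrec's `WarmupPolicy` and `WarmupStage`; the actual structure and values in `model_roo_config.py` are not shown in this summary, so the field values and the previously used policy are assumptions:

```python
from torchrec.optim.warmup import WarmupPolicy, WarmupStage

# Hypothetical stage definition; max_iters plays the role of warm_steps.
warmup_stage = WarmupStage(
    policy=WarmupPolicy.TRANSFORMER,  # assumed to replace WarmupPolicy.LINEAR
    max_iters=4000,                   # illustrative warm_steps value
)
```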