[main][feature][under updating] Adapt for offload activation #2145
Description
This PR adapts for offload activation, a new feature in Megatron-LM (NVIDIA/Megatron-LM#1752). Offload activation selects the inputs of specific modules (such as `core_attn`, `qkv_linear`, `router_fc1`), offloads them to the CPU in the forward pass, and reloads them to the GPU in the backward pass. When offloading modules that include weights (`nn.Parameter`), the attributes attached to those weights (such as `main_grad` and `grad_added_to_main_grad`) are stripped by torch. This feature therefore needs to modify the basic modules in TE (such as `grouped_linear.py` and `layernorm_linear.py`) to preserve these necessary attributes.
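For context, here is a minimal sketch of the offload/reload mechanism built on PyTorch's `torch.autograd.graph.saved_tensors_hooks`. The hook names and the toy linear layer are illustrative assumptions, not the actual Megatron-LM or TE code, and the snippet assumes a CUDA device is available:

```python
import torch

def pack_to_cpu(tensor):
    # Forward pass: move each saved activation (and the weight, which is
    # also saved for backward) to CPU memory.
    return tensor.to("cpu", non_blocking=True)

def unpack_to_gpu(tensor):
    # Backward pass: bring the saved tensor back to the GPU on demand.
    return tensor.to("cuda", non_blocking=True)

x = torch.randn(8, 16, device="cuda", requires_grad=True)
linear = torch.nn.Linear(16, 16).cuda()

with torch.autograd.graph.saved_tensors_hooks(pack_to_cpu, unpack_to_gpu):
    y = linear(x)   # saved inputs are offloaded to CPU here
y.sum().backward()  # and reloaded to the GPU for the backward pass
```

Because the weight is saved for backward alongside the input, it takes the same CPU round trip, and that round trip is what strips Python-level attributes such as `main_grad`; this is the problem the TE-side changes below address.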
Type of change

Changes
Please list the changes introduced in this PR:
- Add support for the `offloading_activation` attribute and for retrieving the `offload_activation` flag in `grouped_linear.py`, `linear.py`, and `layernorm_linear.py`.
- Save the `grad_added_to_main_grad` attribute in the forward pass and retrieve it in the backward pass in `grouped_linear.py`, `linear.py`, and `layernorm_linear.py` (see the sketch after this list).
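A minimal sketch of that save/restore pattern, assuming a custom autograd function; the `_OffloadableLinear` name and the plain matmul body are hypothetical stand-ins for the TE linear modules:

```python
import torch

class _OffloadableLinear(torch.autograd.Function):
    """Sketch: preserve Megatron-style weight attributes across offloading."""

    @staticmethod
    def forward(ctx, inp, weight):
        # The tensors handed back by ctx.saved_tensors can be different
        # Python objects after an offload round trip, so stash the extra
        # weight attributes on ctx, where they survive.
        ctx.save_for_backward(inp, weight)
        ctx.main_grad = getattr(weight, "main_grad", None)
        ctx.grad_added_to_main_grad = getattr(weight, "grad_added_to_main_grad", False)
        return inp @ weight.t()

    @staticmethod
    def backward(ctx, grad_out):
        inp, weight = ctx.saved_tensors
        # Re-attach the attributes that the offload round trip stripped off.
        if ctx.main_grad is not None:
            weight.main_grad = ctx.main_grad
        weight.grad_added_to_main_grad = ctx.grad_added_to_main_grad
        return grad_out @ weight, grad_out.t() @ inp
```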
Checklist: