Commit 03c71b3

faran928 authored and meta-codesync[bot] committed
Remove no-op operation in sharded tensor pool (#3551)
Summary:
Pull Request resolved: #3551

During the implementation of item usharding, a no-op operation was added by mistake. Since item usharding already updates the device map to correctly move all parameters of the corresponding LocalShardPool to cpu / cuda (depending on where the shard is located), setting the device for the shard here is not required. This line is removed to avoid confusion.

Reviewed By: cp2923

Differential Revision: D87244659

fbshipit-source-id: ea2a1d3fcaaba830b1cf6456c3326be2a66554fe
1 parent 5f98e01 commit 03c71b3

File tree

1 file changed: +0 / -1 lines changed

torchrec/distributed/tensor_pool.py

Lines changed: 0 additions & 1 deletion
@@ -305,7 +305,6 @@ def __init__(
     @torch.jit.export
     def set_device(self, device_str: str) -> None:
         self.current_device = torch.device(device_str)
-        self._shard.to(self.current_device)
 
     def forward(self, rank_ids: torch.Tensor) -> torch.Tensor:
         """
