File "E:\conda\lib\site-packages\torch_tensor.py", line 307, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "E:\conda\lib\site-packages\torch\autograd_init_.py", line 154, in backward
Variable._execution_engine.run_backward(
RuntimeError: Function NumpyOpWrapperBackward returned an invalid gradient at index 0 - expected type TensorOptions(dtype=float, device=cuda:0, layout=Strided, requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)) but got TensorOptions(dtype=float, device=cpu, layout=Strided, requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt))
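The check that fails here is generic: autograd requires each Function's backward to return gradients with the same dtype and device as the corresponding inputs, and `NumpyOpWrapperBackward` is returning a CPU gradient for an input that lives on `cuda:0`. A minimal, library-independent sketch that trips the identical check (the toy `CpuGradOp` below is purely illustrative, not part of any library):

```python
import torch

class CpuGradOp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Compute on CPU (e.g. via NumPy) and move the result back to x's device.
        return torch.from_numpy(x.detach().cpu().numpy() * 2.0).to(x.device)

    @staticmethod
    def backward(ctx, grad_output):
        # The bug being illustrated: the gradient stays on CPU although the input was on cuda:0.
        return grad_output.cpu() * 2.0

x = torch.randn(4, device="cuda", requires_grad=True)
CpuGradOp.apply(x).sum().backward()
# RuntimeError: Function CpuGradOpBackward returned an invalid gradient at index 0 -
# expected ... device=cuda:0 ... but got ... device=cpu ...
```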
The forward part of the loss function is as follows:
```python
def forward(self, preds, gt):
    preds_rank = torch_ops.soft_rank(preds.unsqueeze(0)).float()
    gt_rank = torch_ops.soft_rank(gt.unsqueeze(0)).float()
```
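If `torch_ops` here is `fast_soft_sort.pytorch_ops` (an assumption based on the `NumpyOpWrapperBackward` name in the traceback), its `soft_rank` computes through NumPy on the CPU, so a possible workaround sketch, under that assumption, is to rank CPU copies of the tensors and move the results back to the original device:

```python
def forward(self, preds, gt):
    device = preds.device
    # soft_rank is assumed to run on CPU via NumPy, so rank CPU copies ...
    preds_rank = torch_ops.soft_rank(preds.unsqueeze(0).cpu()).float().to(device)
    gt_rank = torch_ops.soft_rank(gt.unsqueeze(0).cpu()).float().to(device)
    # ... and continue the rest of the loss on `device`. The .cpu()/.to(device)
    # ops route the incoming gradient back to CPU, so the NumPy-backed backward
    # receives and returns CPU tensors, matching its CPU inputs.
    ...
```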
Looking forward to your reply.
File "E:\conda\lib\site-packages\torch_tensor.py", line 307, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "E:\conda\lib\site-packages\torch\autograd_init_.py", line 154, in backward
Variable._execution_engine.run_backward(
RuntimeError: Function NumpyOpWrapperBackward returned an invalid gradient at index 0 - expected type TensorOptions(dtype=float, device=cuda:0, layout=Strided, requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt)) but got TensorOptions(dtype=float, device=cpu, layout=Strided, requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt))
the forward part of the loss function is as follows:
def forward(self, preds, gt):
preds_rank = torch_ops.soft_rank(preds.unsqueeze(0)).float()
gt_rank = torch_ops.soft_rank(gt.unsqueeze(0)).float()
Looking forward to your reply.