Description
I am trying to run the MNIST demo (MNIST_demo2.py) on the cuda:0 GPU device. It runs on the CPU but not on the GPU. Has anyone been able to do this?
```
Traceback (most recent call last):
  File "/hdd3/sparse_automap_danyal/synapses-master/MNIST_demo2.py", line 192, in <module>
    set_history = train(log_interval, sparse_net, device, train_loader, optimizer, epoch, set_history)
  File "/hdd3/sparse_automap_danyal/synapses-master/MNIST_demo2.py", line 111, in train
    output = model(data)
  File "/home/lfi/anaconda3/envs/danyal_pytorch16/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/hdd3/sparse_automap_danyal/synapses-master/MNIST_demo2.py", line 52, in forward
    x = F.relu(self.set1(x))
  File "/home/lfi/anaconda3/envs/danyal_pytorch16/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/hdd3/sparse_automap_danyal/synapses-master/synapses/SET_layer.py", line 261, in forward
    z = scatter_add(k, self.inds_out)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
  File "/home/lfi/anaconda3/envs/danyal_pytorch16/lib/python3.6/site-packages/torch_scatter/scatter.py", line 23, in scatter_add
        size[dim] = int(index.max()) + 1
        out = torch.zeros(size, dtype=src.dtype, device=src.device)
        return out.scatter_add_(dim, index, src)
               ~~~~~~~~~~~~~~~~ <--- HERE
    else:
        return out.scatter_add_(dim, index, src)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
```
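For what it's worth, the traceback suggests the layer's index tensor (`self.inds_out` in SET_layer.py) is still on the CPU while the input has been moved to cuda:0; if the indices are stored as plain tensor attributes rather than registered buffers, `model.to(device)` will not move them. Below is a minimal sketch that reproduces the mismatch and one possible workaround. The name `inds_out` is taken from the traceback; everything else is illustrative, not the repo's actual code:

```python
import torch
from torch_scatter import scatter_add

device = torch.device("cuda:0")

src = torch.randn(10, device=device)   # activations moved to the GPU
index = torch.randint(0, 4, (10,))     # index tensor left on the CPU

# This line reproduces the error above:
# RuntimeError: Expected all tensors to be on the same device ... cuda:0 and cpu
# z = scatter_add(src, index)

# Workaround: move the index to the same device as the input first.
z = scatter_add(src, index.to(src.device))
print(z.shape)  # torch.Size([4])
```

Inside the layer itself, registering the indices as a buffer (e.g. `self.register_buffer("inds_out", inds_out)` instead of a plain attribute) should make `model.to(device)` move them automatically, though I haven't verified this against the repo's code.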