I am using Kitten TTS on a system with an NVIDIA GeForce RTX 3060 Ti, but the model runs on CPU instead of GPU.
System Info
OS: Ubuntu
GPU: RTX 3060 Ti
Python: 3.12.2
Checks
import torch
print(torch.cuda.is_available())  # checks whether PyTorch can see a CUDA device
The GPU is detected (nvidia-smi also works), but the model still runs on the CPU.
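Since Kitten TTS appears to run inference through ONNX Runtime rather than PyTorch (my assumption based on how the package is distributed), the torch check above may not be the relevant one. A minimal sketch of the equivalent ONNX Runtime check, assuming onnxruntime is what the model actually uses:

import onnxruntime as ort

# List the execution providers this onnxruntime build supports.
# The CPU-only package reports only CPUExecutionProvider; a GPU build
# would also report CUDAExecutionProvider.
print(ort.get_available_providers())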
Expected
Model should use CUDA (GPU).
Request
Is GPU inference supported? If yes, how can I enable it?
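In case it helps narrow things down, here is a sketch of what I would expect enabling GPU inference to look like if the model is a plain ONNX file loaded through ONNX Runtime directly; the model filename and the provider handling are my assumptions, not the actual Kitten TTS API:

# Requires the GPU build of ONNX Runtime:
#   pip uninstall onnxruntime
#   pip install onnxruntime-gpu
import onnxruntime as ort

session = ort.InferenceSession(
    "kitten_tts.onnx",  # hypothetical model path
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())  # CUDAExecutionProvider should be listed first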