Fail to detect CUDA device?
I'm trying to run modelscope on an NVIDIA card with CUDA, but the GPU fails to be detected:
RuntimeError: TextToVideoSynthesisPipeline: TextToVideoSynthesis: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
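Before digging into the environment, it can help to confirm what PyTorch itself reports. A minimal diagnostic sketch (run it with the same Python interpreter that modelscope uses; the function name is just an example):

```python
def cuda_report():
    """Report whether the installed PyTorch build can see a CUDA device."""
    try:
        import torch
    except ImportError:
        return "torch is not installed in this interpreter"
    lines = [
        f"torch version: {torch.__version__}",
        # torch.version.cuda is None for CPU-only wheels
        f"built with CUDA: {torch.version.cuda}",
        f"cuda available: {torch.cuda.is_available()}",
    ]
    if torch.cuda.is_available():
        lines.append(f"device: {torch.cuda.get_device_name(0)}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(cuda_report())
```

If "built with CUDA" prints `None`, the wheel was built without GPU support and no driver fix will help; if it prints a version but "cuda available" is `False`, the problem is in the driver or runtime setup instead.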
I ran into problems running in WSL2 where nvidia-smi showed the graphics card output I expected, but TensorFlow couldn't see the GPU. Switching to a Miniconda environment fixed it. All that to say I think you are probably running into a dependency issue somewhere in your setup.
I'm not sure Miniconda will work in my case, since I need to install into the Python bundled with Blender (local) for this add-on: https://github.com/tin2tin/text_to_video
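For what it's worth, you can usually install packages into Blender's bundled interpreter by invoking it directly with `-m pip`. A sketch, assuming a typical Linux layout; the path and package name here are placeholders, not from the add-on's docs:

```shell
# Hypothetical path -- adjust for your Blender version and OS.
# (On Windows it is usually <Blender folder>\<version>\python\bin\python.exe)
BLENDER_PY="${BLENDER_PY:-/opt/blender/4.0/python/bin/python3.11}"

if [ -x "$BLENDER_PY" ]; then
    # Make sure pip exists inside that interpreter, then install there,
    # so the package lands in Blender's own site-packages:
    "$BLENDER_PY" -m ensurepip --upgrade
    "$BLENDER_PY" -m pip install modelscope
else
    echo "Blender Python not found at $BLENDER_PY -- adjust the path"
fi
```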
It's possible your version of PyTorch wasn't built with CUDA support. To fix this, uninstall PyTorch, select your platform in the configuration widget here, and run the conda or pip command it generates: https://pytorch.org/get-started/locally/
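You can tell the two failure modes apart programmatically, which is useful when installing into Blender's bundled Python where you can't easily poke around interactively. A sketch (the function name and return strings are my own, not from any library):

```python
import importlib.util

def diagnose():
    """Classify why CUDA might be unavailable to PyTorch."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    if torch.version.cuda is None:
        # A version string like "2.1.0+cpu" means the wheel has no CUDA
        # support at all -- reinstall using the selector on
        # pytorch.org/get-started/locally
        return "cpu-only build"
    if not torch.cuda.is_available():
        # The wheel supports CUDA, so the problem is the driver/runtime
        return "cuda build, but no usable driver/GPU"
    return "cuda ok"
```

Only the "cpu-only build" case is fixed by reinstalling PyTorch; the "no usable driver" case points back at the WSL2/driver issues mentioned above.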