When installing flash-attn: OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.
Hi,
I'm trying to create a Gradio demo using ZERO, but I'm getting the error OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.
I'm using the latest version of PyTorch (2.2.0). Here are the logs:
Collecting flash-attn
Downloading flash_attn-2.5.2.tar.gz (2.5 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.5/2.5 MB 305.4 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'error'
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [20 lines of output]
fatal: not a git repository (or any of the parent directories): .git
/tmp/pip-install-ctfg1z8j/flash-attn_535dcb5142ad4e24891437875a885bb7/setup.py:78: UserWarning: flash_attn was requested, but nvcc was not found. Are you sure your environment has nvcc available? If you're installing within a container from https://hub.docker.com/r/pytorch/pytorch, only images whose names contain 'devel' will provide nvcc.
warnings.warn(
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-ctfg1z8j/flash-attn_535dcb5142ad4e24891437875a885bb7/setup.py", line 133, in <module>
CUDAExtension(
File "/home/user/.local/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1074, in CUDAExtension
library_dirs += library_paths(cuda=True)
File "/home/user/.local/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1201, in library_paths
if (not os.path.exists(_join_cuda_home(lib_dir)) and
File "/home/user/.local/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 2407, in _join_cuda_home
raise OSError('CUDA_HOME environment variable is not set. '
OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.
torch.__version__ = 2.2.0+cu121
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
Thanks!
As far as I understand, the GPU is only attached while certain pieces of code are executed, and the Space is built and installed on a CPU-only machine without the card.
Yeah, I'm pretty sure that's how it works, but in my experience it still installs the CUDA build of PyTorch on the CPU machine.
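For context, on ZeroGPU the GPU-dependent code goes inside a function decorated with @spaces.GPU; the build and startup phases run on CPU-only hardware. A minimal sketch, assuming the standard spaces package:

import spaces
import torch

@spaces.GPU  # ZeroGPU attaches a GPU only while this function runs
def gpu_name():
    # Inside the decorated function, CUDA is available
    return torch.cuda.get_device_name(0)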
You can try setting the environment variable manually:
export CUDA_HOME=/usr/local/cuda-X.X
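If you want to check what the build environment actually provides, something like this (a sketch; the path below is just the common default, not guaranteed) can be dropped into app.py or a build script:

import os
import shutil

# Sketch: report whether the CUDA toolkit is visible at all
print("CUDA_HOME =", os.environ.get("CUDA_HOME"))
print("nvcc on PATH:", shutil.which("nvcc"))

# Setting CUDA_HOME only helps if the toolkit is actually installed there
if os.path.isdir("/usr/local/cuda"):
    os.environ.setdefault("CUDA_HOME", "/usr/local/cuda")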
It might help to build the Space on a GPU machine, so issues like this one don't happen. Not sure it's viable cost-wise, though.
Will it still be compatible with ZERO?
Hi, thanks for reporting this
Indeed, flash-attn is not well supported on ZeroGPU currently.
Here is an easy (but ugly) workaround to fix it though: https://huggingface.co/spaces/HuggingFaceM4/screenshot2html/blob/main/app.py#L19
(you must also remove flash-attn from your (pre-)requirements.txt, of course)
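The linked workaround boils down to something like this (a sketch, not the exact contents of that file): install flash-attn at runtime from app.py, with the CUDA compile step skipped, since that step is what needs nvcc and CUDA_HOME:

import os
import subprocess

# Sketch: install flash-attn from app.py instead of requirements.txt.
# FLASH_ATTENTION_SKIP_CUDA_BUILD tells its setup.py not to compile the CUDA
# extension, which is the part that requires nvcc / CUDA_HOME.
subprocess.run(
    "pip install flash-attn --no-build-isolation",
    env={**os.environ, "FLASH_ATTENTION_SKIP_CUDA_BUILD": "TRUE"},
    shell=True,
    check=True,
)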
At some point, FLASH_ATTENTION_SKIP_CUDA_BUILD will probably be set by default during the build phase, so having flash-attn in the requirements will simply work.
Let me know if it worked for you!
I'll try that, thanks!