allow using CUDA libraries installed as Python modules #8013
Comments
Hi @FlorinAndrei, thanks for the feedback! This is definitely what we want. I'll start investigating the changes required to support the PyPI-distributed CUDA Toolkit packages.
Still not working; it still relies on the NVIDIA CUDA suite instead of the Python components.
I have already installed it in my Python environment: nvidia-cusolver-cu12 11.4.5.107. Adding the library path to LD_LIBRARY_PATH at the end of .bashrc in the user's home directory (e.g. /root/.bashrc) can temporarily work around the problem:
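The workaround above can be sketched as a small helper that collects the lib/ directories shipped by the nvidia-* pip wheels, producing the value to append to LD_LIBRARY_PATH. The path layout (site-packages/nvidia/&lt;pkg&gt;/lib) is an assumption based on how these wheels are typically laid out, not something confirmed in this issue:

```python
import glob
import os
import sysconfig


def pip_cuda_lib_dirs() -> str:
    """Collect lib/ directories shipped by nvidia-* pip wheels.

    Returns a colon-separated string suitable for appending to
    LD_LIBRARY_PATH; empty if no such wheels are installed.
    """
    site_packages = sysconfig.get_paths()["purelib"]
    # Assumed wheel layout: site-packages/nvidia/<package>/lib/
    pattern = os.path.join(site_packages, "nvidia", "*", "lib")
    return ":".join(sorted(glob.glob(pattern)))


if __name__ == "__main__":
    print(pip_cuda_lib_dirs())
```

One could then add something like `export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$(python find_cuda_libs.py)"` to .bashrc, though this remains a stopgap rather than proper wheel discovery.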
Description
JAX, PyTorch, and TensorFlow can all use CUDA libraries installed as Python modules. For example, if you install JAX with the pip version of CUDA...
...that will also install the relevant CUDA libraries as Python modules:
Then you can install TensorFlow as well...
...and then PyTorch...
...and they all work perfectly without needing a separate CUDA install.
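A quick way to confirm that the CUDA components really did arrive as Python packages is to probe for them with importlib; the nvidia.* namespace names below are an assumption based on how the cu12 wheels are commonly packaged, not taken from this issue:

```python
import importlib.util


def wheel_installed(module: str) -> bool:
    """Return True if the module can be located without importing it."""
    try:
        return importlib.util.find_spec(module) is not None
    except ModuleNotFoundError:
        # The parent package (e.g. "nvidia") is not installed at all.
        return False


# Hypothetical namespace modules shipped by the nvidia-*-cu12 wheels.
for mod in ("nvidia.cublas", "nvidia.cudnn", "nvidia.cusolver"):
    print(mod, "present" if wheel_installed(mod) else "missing")
```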
However, if you install cupy-cuda11x and try to run it, it claims it cannot find CUDA. It only works if you give it a completely separate installation of CUDA, in /usr/local or something. Having CUDA installed via pip is easier and more convenient, and may save some storage.
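The failure mode can be illustrated with ctypes: loading a CUDA library by SONAME searches the standard dynamic-loader path, which does not include the wheels' directories under site-packages, so the lookup fails even when the file is present there. The SONAME libcusolver.so.11 is assumed here for illustration:

```python
import ctypes


def can_dlopen(soname: str) -> bool:
    """Try to load a shared library via the default loader search path."""
    try:
        ctypes.CDLL(soname)
        return True
    except OSError:
        return False


# Without /usr/local/cuda or an LD_LIBRARY_PATH entry pointing at the
# pip wheels' lib/ directories, this lookup fails even if the library
# file exists inside site-packages/nvidia/cusolver/lib.
print(can_dlopen("libcusolver.so.11"))
```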
Additional Information
I'm using Ubuntu 22.04 with an RTX 3090.