Update prepare_build_environment_windows.sh #1830
Closed
+3
−3
There was a mistaken path to CUDA 12.2, which is causing compatibility issues:
1) `torch` only has the following prebuilt wheels for CUDA support:
2) Based on the `torch` version, `triton` will be bundled for Linux, with `mkl` and `sympy` being dependencies for all platforms. CTranslate2 currently uses CUDA 12.2, for which `torch` doesn't build wheels!
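The mismatch in point 2 can be sketched in a few lines. The set of CUDA versions with prebuilt `torch` wheels below is an assumption for illustration (it changes between `torch` releases); the point is only that a 12.2 runtime is not among them:

```python
# Sketch: check whether a pinned CUDA runtime matches a CUDA version
# that torch ships prebuilt wheels for. The supported set below is an
# ASSUMPTION for illustration -- consult the torch release notes for
# the real list for your torch version.
TORCH_WHEEL_CUDA_VERSIONS = {"11.8", "12.1", "12.4"}  # assumed

def torch_wheel_available(cuda_runtime: str) -> bool:
    """Return True if torch publishes wheels for this CUDA major.minor."""
    major_minor = ".".join(cuda_runtime.split(".")[:2])
    return major_minor in TORCH_WHEEL_CUDA_VERSIONS

print(torch_wheel_available("12.2.140"))  # CTranslate2's current pin -> False
print(torch_wheel_available("12.4.1"))    # proposed pin -> True
```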
3) CUDA and cuDNN compatibility is flexible and primarily affects whether static linking is available:
4) HOWEVER, despite this compatibility, `torch` presents its own complications. This is outlined here: https://github.com/pytorch/pytorch/blob/main/RELEASE.md#release-compatibility-matrix
5) The problem...
`ctranslate2` currently installs CUDA libraries originating from release 12.2, which results in the following being installed:

```
nvidia-cuda-runtime-cu12==12.2.140
nvidia-cublas-cu12==12.2.5.6
nvidia-cuda-nvcc-cu12==12.2.140
nvidia-cuda-nvrtc-cu12==12.2.140
```

These do not match any of the CUDA libraries that `torch` bundles with its wheels. `torch` is, apparently, very specific about which CUDA libraries it supports. For example, these libraries differ even between CUDA releases 12.4.0 (which `torch` doesn't support) and 12.4.1 (which `torch` bundles with all its wheels).
6) Solution...
Make `ctranslate2` use the libraries from CUDA 12.4.1 (not even 12.4.0) to maximize compatibility.
7) Additional reasons...
Other libraries can be highly dependent on compatibility with `torch` and/or CUDA versions. For example:

**Xformers Compatibility**

```
pip install https://download.pytorch.org/whl/cu124/xformers-0.0.28.post3-cp311-cp311-win_amd64.whl
```
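The wheel URL above follows a pattern (CUDA tag, package version, CPython tag, platform). A small helper makes the moving parts explicit; note that this layout is inferred from the single `cu124` example here and is an assumption, not a documented index API:

```python
# Sketch: build a PyTorch download-index wheel URL from its parts.
# The URL layout is INFERRED from the cu124 xformers example above;
# treat it as an assumption, not a documented contract.

def wheel_url(cuda_tag: str, pkg: str, version: str, py_tag: str, platform: str) -> str:
    """Assemble a wheel URL on download.pytorch.org for a given CUDA tag."""
    return (f"https://download.pytorch.org/whl/{cuda_tag}/"
            f"{pkg}-{version}-{py_tag}-{py_tag}-{platform}.whl")

url = wheel_url("cu124", "xformers", "0.0.28.post3", "cp311", "win_amd64")
print(url)
```

This reproduces the exact URL from the command above, which shows how tightly the wheel is coupled to one CUDA tag, one Python version, and one platform.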
**LINUX Flash Attention 2 Compatibility**
Regarding `triton`, which `torch` requires... `triton==3.0.0` only supports up to Python 3.11. `triton==3.1.0`, on the other hand, supports Python 3.12.

**Conclusion**
Using CUDA libraries from release 12.4.1 maximizes compatibility with `torch` and other popular libraries like Flash Attention 2, `xformers`, and `triton` (now a requirement of `torch`), not to mention many others.
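The mismatch at the heart of this PR can also be checked mechanically. Below is a minimal sketch: the package pins are the 12.2 versions quoted in point 5, and the prefix comparison is an assumption about how NVIDIA's `-cu12` package versions track CUDA releases:

```python
# Sketch: flag installed NVIDIA CUDA packages whose versions do not
# come from the intended CUDA release. Versions are passed in as a
# dict so the check runs without the packages installed; in practice
# you could fill it via importlib.metadata.version(). The prefix
# check is an ASSUMPTION about NVIDIA's versioning scheme.

def audit_cuda_packages(installed: dict[str, str], release_prefix: str) -> list[str]:
    """Return packages whose version does not start with release_prefix."""
    return [name for name, ver in installed.items()
            if not ver.startswith(release_prefix)]

# The 12.2 pins currently pulled in by ctranslate2 (from point 5 above):
current = {
    "nvidia-cuda-runtime-cu12": "12.2.140",
    "nvidia-cublas-cu12": "12.2.5.6",
    "nvidia-cuda-nvcc-cu12": "12.2.140",
    "nvidia-cuda-nvrtc-cu12": "12.2.140",
}

# Every one of them fails an audit against the proposed 12.4 release:
print(audit_cuda_packages(current, "12.4"))
```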