
Commit 1bc6689
do not install flash-attention by default
JegernOUTT committed Nov 6, 2023
1 parent c9d388a commit 1bc6689
Showing 2 changed files with 2 additions and 1 deletion.
Dockerfile (1 addition, 0 deletions)

@@ -39,6 +39,7 @@ RUN git clone https://github.com/smallcloudai/linguist.git /tmp/linguist \
     && rake build_gem
 ENV PATH="${PATH}:/tmp/linguist/bin"
 
+ENV INSTALL_OPTIONAL=TRUE
 ENV BUILD_CUDA_EXT=1
 ENV GITHUB_ACTIONS=true
 ENV TORCH_CUDA_ARCH_LIST="6.0;6.1;7.0;7.5;8.0;8.6;8.9;9.0+PTX"
setup.py (1 addition, 1 deletion)

@@ -8,7 +8,7 @@
 from typing import List, Set
 
 setup_package = os.environ.get("SETUP_PACKAGE", None)
-install_optional = os.environ.get("INSTALL_OPTIONAL", "TRUE")
+install_optional = os.environ.get("INSTALL_OPTIONAL", "FALSE")
 
 # Setting some env variables to force flash-attention build from sources
 # We can get rid of them when https://github.com/Dao-AILab/flash-attention/pull/540 is merged
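Taken together, the two hunks flip the default: a plain source install now skips optional dependencies such as flash-attention unless INSTALL_OPTIONAL is set to TRUE, while the Docker image exports that variable and keeps installing them. The rest of setup.py is not visible in this diff, so the snippet below is only a hypothetical sketch of one common way such a flag is consumed; the get_optional_requirements helper and the package metadata are assumed names, not the project's actual code.

    # Hypothetical sketch, not the project's actual setup.py: shows one common
    # way an INSTALL_OPTIONAL flag can gate heavy optional requirements.
    import os
    from typing import List

    from setuptools import setup

    install_optional = os.environ.get("INSTALL_OPTIONAL", "FALSE")


    def get_optional_requirements() -> List[str]:
        # Assumed helper: include flash-attn only when explicitly requested.
        if install_optional.upper() == "TRUE":
            return ["flash-attn"]
        return []


    setup(
        name="example-package",  # placeholder metadata for the sketch
        install_requires=["torch"] + get_optional_requirements(),
    )

Assuming setup.py wires the flag roughly as above, running "INSTALL_OPTIONAL=TRUE pip install ." would restore the previous behavior for a source install, and the ENV line added to the Dockerfile preserves it inside the image.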
