Forked from Dao-AILab/flash-attention
Showing 30 changed files with 309 additions and 55 deletions.
```diff
@@ -65,14 +65,6 @@ ENV PIP_NO_CACHE_DIR=1
 # # apex and pytorch-fast-transformers take a while to compile so we install them first
 # TD [2022-04-28] apex is already installed. In case we need a newer commit:
 # RUN pip install --upgrade --force-reinstall --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_multihead_attn" --global-option="--fmha" --global-option="--fast_layer_norm" --global-option="--xentropy" git+https://github.com/NVIDIA/apex.git#egg=apex
-# TD [2021-10-28] pytorch-fast-transformers doesn't have a wheel compatible with CUDA 11.3 and Pytorch 1.10
-# So we install from source, and change compiler flag -arch=compute_60 -> -arch=compute_70 for V100
-# RUN pip install pytorch-fast-transformers==0.4.0
-# RUN pip install git+git://github.com/idiap/[email protected]  # doesn't work on V100
-RUN git clone https://github.com/idiap/fast-transformers \
-    && sed -i 's/\["-arch=compute_60"\]/\["-arch=compute_70"\]/' fast-transformers/setup.py \
-    && pip install fast-transformers/ \
-    && rm -rf fast-transformers
 
 # xgboost conflicts with deepspeed
 RUN pip uninstall -y xgboost && DS_BUILD_UTILS=1 DS_BUILD_FUSED_LAMB=1 pip install deepspeed==0.7.5
```
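The removed block built fast-transformers from source, patching its setup.py so NVCC targets compute capability 7.0 (V100) instead of 6.0. As an illustrative sketch only, not part of this commit: one could derive that flag from the local GPU rather than hard-coding it, assuming PyTorch with CUDA is installed.

```python
# Illustrative sketch (not in the commit): derive the NVCC arch flag from the
# local GPU instead of hard-coding -arch=compute_70 the way the sed line did.
import torch

def nvcc_arch_flag(device: int = 0) -> str:
    major, minor = torch.cuda.get_device_capability(device)  # (7, 0) on a V100
    return f"-arch=compute_{major}{minor}"

print(nvcc_arch_flag())  # "-arch=compute_70" on a V100
```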
```diff
@@ -0,0 +1,7 @@
+# @package _global_
+defaults:
+  - /experiment/owt/gpt2l-hf.yaml
+  - override /model/gpt2model: gpt2-xlarge
+
+datamodule:
+  batch_size: 1
```
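This new experiment file composes the existing gpt2l-hf experiment via Hydra defaults, swaps the model group to gpt2-xlarge, and drops the per-device batch size to 1 to fit the larger model. A minimal sketch of inspecting the composed result follows; the config_path, config_name, and the experiment override name are assumptions, since the page does not show where this file lives.

```python
# Hypothetical composition check; config_path, config_name, and the experiment
# name are assumptions (the commit page does not show this file's path).
from hydra import compose, initialize

with initialize(config_path="training/configs", version_base=None):
    cfg = compose(config_name="config", overrides=["experiment=owt/gpt2xl-hf"])
    print(cfg.datamodule.batch_size)  # expect 1, from the override above
```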
training/configs/experiment/pile/gpt3-2.7B-flash-hdim128-rotary-8k.yaml: 2 changes (1 addition, 1 deletion)
training/configs/experiment/pile/gpt3-2.7B-flash-hdim128-rotary.yaml: 2 changes (1 addition, 1 deletion)
training/configs/experiment/pile/gpt3-2.7B-flash-hdim128.yaml: 18 changes (18 additions, 0 deletions)
```diff
@@ -0,0 +1,18 @@
+# @package _global_
+defaults:
+  - /experiment/pile/gpt3xl-flash.yaml
+
+model:
+  config:
+    n_embd: 2560
+    n_head: 20  # Headdim 128 is faster than headdim 80
+    n_layer: 32
+    initializer_range: ${eval:"(2 / (${.n_embd} * 5)) ** 0.5"}
+    mlp_checkpoint_lvl: 0
+
+datamodule:
+  batch_size: ${eval:"1 if ${train.gpu_mem} < 40 else (2 if ${train.gpu_mem} < 80 else 4)"}
+
+train:
+  optimizer:
+    lr: 1.6e-4
```
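For concreteness, here is the arithmetic behind this config's eval expressions, restated in plain Python (a worked check, not part of the commit):

```python
# Worked check of the derived values above (plain Python, not in the commit).
n_embd, n_head = 2560, 20
print(n_embd // n_head)           # 128, the head dim the n_head comment refers to
print((2 / (n_embd * 5)) ** 0.5)  # 0.0125, the initializer_range

def batch_size(gpu_mem_gb: int) -> int:
    # Same ladder as datamodule.batch_size: 1 below 40 GB, 2 below 80 GB, else 4.
    return 1 if gpu_mem_gb < 40 else (2 if gpu_mem_gb < 80 else 4)

print([batch_size(g) for g in (24, 40, 80)])  # [1, 2, 4]
```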
training/configs/experiment/pile/gpt3-2.7B-flash-rotary-8k.yaml: 2 changes (1 addition, 1 deletion)
```diff
@@ -0,0 +1,18 @@
+# @package _global_
+defaults:
+  - /experiment/pile/gpt3xl-flash.yaml
+
+model:
+  config:
+    n_embd: 2560
+    n_head: 32
+    n_layer: 32
+    initializer_range: ${eval:"(2 / (${.n_embd} * 5)) ** 0.5"}
+    mlp_checkpoint_lvl: 0
+
+datamodule:
+  batch_size: ${eval:"1 if ${train.gpu_mem} < 40 else (2 if ${train.gpu_mem} < 80 else 4)"}
+
+train:
+  optimizer:
+    lr: 1.6e-4
```
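This variant differs from the hdim128 config above only in n_head: 32 heads give a head dimension of 80 rather than 128, which the hdim128 file's comment flags as the slower choice.

```python
# Head dimension implied by each n_head choice (just arithmetic, not in the commit).
n_embd = 2560
print(n_embd // 20, n_embd // 32)  # 128 vs. 80
```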