
Commit 2706020

fix: image build failure due to flash attn (#629)
* Add free up disk space to gh runners

Signed-off-by: Dushyant Behl <[email protected]>

* Build flash attn without cache to avoid problems

Signed-off-by: Dushyant Behl <[email protected]>

---------

Signed-off-by: Dushyant Behl <[email protected]>
1 parent faea400 commit 2706020

File tree

.github/workflows/coverage.yaml
build/Dockerfile

2 files changed: +4 -1 lines

.github/workflows/coverage.yaml

Lines changed: 2 additions & 0 deletions
@@ -10,6 +10,8 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v4
+      - name: "Free up disk space"
+        uses: ./.github/actions/free-up-disk-space
       - name: Set up Python 3.12
         uses: actions/setup-python@v4
         with:
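The new step calls a local composite action at ./.github/actions/free-up-disk-space, which is not shown in this diff. A minimal sketch of what such an action typically contains is below; the action name, description, and the exact paths removed are assumptions, not the repository's actual action.

# .github/actions/free-up-disk-space/action.yml -- hypothetical sketch, not the
# file from this repository. Actions like this usually delete large preinstalled
# toolchains on ubuntu-latest runners so disk-heavy builds (e.g. flash-attn) fit.
name: "Free up disk space"
description: "Remove unused preinstalled software from the GitHub-hosted runner"
runs:
  using: "composite"
  steps:
    - name: Remove unused toolchains
      shell: bash
      run: |
        # These paths are common space hogs on ubuntu-latest (assumed; adjust as needed).
        sudo rm -rf /usr/share/dotnet /usr/local/lib/android /opt/ghc
        df -h /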

build/Dockerfile

Lines changed: 2 additions & 1 deletion
@@ -156,9 +156,10 @@ ENV PIP_NO_BINARY=mamba-ssm,mamba_ssm
 RUN --mount=type=cache,target=/home/${USER}/.cache/pip,uid=${USER_UID} \
     python -m pip install --user wheel && \
     python -m pip install --user "$(head bdist_name)" && \
-    python -m pip install --user "$(head bdist_name)[flash-attn]" && \
     python -m pip install --user --no-build-isolation "$(head bdist_name)[mamba]"
 
+RUN python -m pip install --user --no-build-isolation "$(head bdist_name)[flash-attn]"
+
 # fms_acceleration_peft = PEFT-training, e.g., 4bit QLoRA
 # fms_acceleration_foak = Fused LoRA and triton kernels
 # fms_acceleration_aadp = Padding-Free Flash Attention Computation
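Moving the flash-attn extra into its own RUN, without the pip cache mount, matches the commit message: the wheel is built fresh instead of reusing a possibly stale cached build. A rough sketch of the resulting ordering is below; the note that the base package install pulls in torch (which flash-attn's build imports when --no-build-isolation is used) is an assumption, not stated in the diff.

# Sketch of the install ordering after this change (not a verbatim excerpt).
# The base package install still uses the pip cache mount.
RUN --mount=type=cache,target=/home/${USER}/.cache/pip,uid=${USER_UID} \
    python -m pip install --user "$(head bdist_name)"   # assumed to install torch as a dependency
# flash-attn: no cache mount and no build isolation -- its native build compiles
# against the torch installed above rather than a separate isolated environment.
RUN python -m pip install --user --no-build-isolation "$(head bdist_name)[flash-attn]"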
