
Commit f3a5bb5

Update README.MD
1 parent f4180c8 commit f3a5bb5

File tree

1 file changed: +5 -7 lines changed


README.MD

Lines changed: 5 additions & 7 deletions
@@ -1,4 +1,3 @@
-```markdown
 # Flash Attention Windows Wheels (Python 3.10)
 
 Pre-built Windows wheels for [Flash-Attention 2](https://github.com/Dao-AILab/flash-attention) - The state-of-the-art efficient attention implementation for NVIDIA GPUs.
@@ -44,7 +43,7 @@ Note: These wheels are community-maintained and are not officially supported by
 
 ## Quick Installation
 
-```bash
+```sh
 # Simply download the wheel file and install with:
 pip install flash_attn-2.7.0.post2-cp310-cp310-win_amd64.whl
 ```
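The hunk above only changes the fence language, but once the wheel is installed it is worth confirming that the extension actually loads and runs. A minimal smoke-test sketch, assuming an NVIDIA GPU and a PyTorch build matching the wheel's CUDA version; `flash_attn_func` is the package's public attention entry point, and the tensor shapes are illustrative:

```python
# Post-install smoke test for the wheel installed above.
# Assumes: NVIDIA GPU present, PyTorch built for a compatible CUDA version.
import torch
from flash_attn import flash_attn_func

# flash-attn expects fp16/bf16 CUDA tensors shaped (batch, seqlen, nheads, headdim).
q = torch.randn(1, 128, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

out = flash_attn_func(q, k, v)  # standard (non-causal) attention
print(out.shape)                # torch.Size([1, 128, 8, 64])
```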
@@ -92,7 +91,7 @@ except RuntimeError as e:
 ### Build Steps
 
 1. **Prepare Environment**
-```powershell
+```sh
 # Install build dependencies
 pip install ninja packaging
 
@@ -102,7 +101,7 @@ $env:CUDA_HOME="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4"
 ```
 
 2. **Build Process**
-```powershell
+```sh
 # Remove existing installation
 pip uninstall flash-attn -y
 
@@ -156,10 +155,9 @@ Distributed under the same license as Flash Attention. See [Flash Attention Lice
 ## Security
 
 Verify downloaded wheel checksums:
-```
+```sh
 # Generate checksum (Powershell)
 Get-FileHash flash_attn-2.7.0.post2-cp310-cp310-win_amd64.whl -Algorithm SHA256
-
+```
 # Compare with expected value
 15e0c4af6349b66c1003bf8541487636aca0a6ad81d6593d6711409983fd616c flash_attn-2.7.0.post2-cp310-cp310-win_amd64.whl
-```
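The final hunk leaves the expected SHA-256 value as plain text after the fenced block. The same verification can also be done without PowerShell; a sketch using Python's standard `hashlib`, with the checksum and filename taken directly from the README:

```python
# Verify the wheel's SHA-256 against the value published in the README.
import hashlib
from pathlib import Path

EXPECTED = "15e0c4af6349b66c1003bf8541487636aca0a6ad81d6593d6711409983fd616c"
WHEEL = Path("flash_attn-2.7.0.post2-cp310-cp310-win_amd64.whl")

sha256 = hashlib.sha256()
with WHEEL.open("rb") as f:
    # Hash in 1 MiB chunks so the whole wheel never has to sit in memory.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

if sha256.hexdigest() == EXPECTED:
    print("checksum OK")
else:
    raise SystemExit(f"checksum mismatch: {sha256.hexdigest()}")
```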
