Commit 7e63d9c

Merge pull request #160 from SmallDoges/update-docs

Update installation requirements and streamline process

2 parents: 6c3923e + 49c2445

2 files changed: +20 −26 lines

README.md

Lines changed: 10 additions & 13 deletions
````diff
@@ -118,32 +118,29 @@ The following table shows the backward pass performance comparison between Flash
 
 ## Installation
 
-### Prerequisites
+### Requirements
 
-- **Python**: 3.8 or later
-- **PyTorch**: 2.0.0 or later
-- **CUDA**: 11.8 or later
+- **Linux**: Ubuntu 22.04 or later
 - **NVIDIA GPU**: Compute Capability 8.0 or higher
 - **C++ Compiler**: GCC 7+
+- **CUDA**: 11.8 or later
+- **Python**: 3.9 or later
+- **PyTorch**: 2.5.1 or later
 
-### CUDA Environment Setup
+### Install
 
-Ensure your CUDA environment is properly configured:
+You can install Flash-DMA via pre-compiled wheels:
 
 ```bash
-# Check CUDA installation
-nvcc --version
-
-# Set CUDA_HOME if needed
-export CUDA_HOME=/usr/local/cuda
+pip install flash-dmattn --no-build-isolation
 ```
 
-### Install from Source
+Alternatively, you can compile and install from source:
 
 ```bash
 git clone https://github.com/SmallDoges/flash-dmattn.git
 cd flash-dmattn
-MAX_JOBS=4 pip install . --no-build-isolation
+pip install . --no-build-isolation
 ```
 
````
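For readers who want to sanity-check the updated requirements before installing, a minimal sketch follows; it assumes PyTorch is already installed and a CUDA-capable GPU is visible, and is not part of the repository's documentation:

```bash
# Hypothetical pre-install checks (a sketch, not project-provided tooling).

# CUDA toolkit version: should report release 11.8 or later
nvcc --version

# PyTorch version (>= 2.5.1) and GPU compute capability (>= (8, 0))
python -c "import torch; print(torch.__version__); print(torch.cuda.get_device_capability())"
```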
README_zh.md

Lines changed: 10 additions & 13 deletions
````diff
@@ -118,32 +118,29 @@ Flash-DMA is a high-performance attention implementation that combines Flash Attention's memory
 
 ## Installation
 
-### Prerequisites
+### Requirements
 
-- **Python**: 3.8 or later
-- **PyTorch**: 2.0.0 or later
-- **CUDA**: 11.8 or later
+- **Linux**: Ubuntu 22.04 or later
 - **NVIDIA GPU**: Compute Capability 8.0 or higher
 - **C++ Compiler**: GCC 7+
+- **CUDA**: 11.8 or later
+- **Python**: 3.9 or later
+- **PyTorch**: 2.5.1 or later
 
-### CUDA Environment Setup
+### Install
 
-Ensure your CUDA environment is correctly configured:
+You can install Flash-DMA via pre-compiled wheels:
 
 ```bash
-# Check CUDA installation
-nvcc --version
-
-# Set CUDA_HOME if needed
-export CUDA_HOME=/usr/local/cuda
+pip install flash-dmattn --no-build-isolation
 ```
 
-### Install from Source
+Alternatively, you can compile and install from source:
 
 ```bash
 git clone https://github.com/SmallDoges/flash-dmattn.git
 cd flash-dmattn
-MAX_JOBS=4 pip install . --no-build-isolation
+pip install . --no-build-isolation
 ```
 
````
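Note that the old instructions pinned MAX_JOBS=4 for the source build. MAX_JOBS is a standard PyTorch C++/CUDA extension build variable rather than anything specific to this repository, so on memory-constrained machines it can still be set to cap parallel compile jobs; a sketch:

```bash
# Optional: limit parallel compile jobs during a source build.
# MAX_JOBS is read by PyTorch's extension build system (torch.utils.cpp_extension);
# the value 2 here is an arbitrary example, not a project recommendation.
git clone https://github.com/SmallDoges/flash-dmattn.git
cd flash-dmattn
MAX_JOBS=2 pip install . --no-build-isolation
```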
Comments (0)