Commit 8098d61

Update README.md
1 parent 4c23cb9 commit 8098d61

File tree

1 file changed (+2 −1 lines changed)

README.md

Lines changed: 2 additions & 1 deletion
```diff
@@ -17,10 +17,11 @@ The official and recommended backend server for ExLlamaV3 is [TabbyAPI](https://
 
 ### ⚠️ Important
 
-- **Qwen3-Next** support is currently experimental and still requires profiling and optimization, so don't expect
+- **Qwen3-Next** support is currently experimental and still requires some optimization, so don't expect
 optimal performance just yet. [Flash Linear Attention](https://github.com/fla-org/flash-linear-attention) is required
 and this in turn requires Triton. [causal-conv1d](https://github.com/Dao-AILab/causal-conv1d) is supported and
 recommended but not required.
+- **Qwen3-Next** currently does not support tensor/expert parallelism.
 
 ## Architecture support
 
```
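The changed README lines describe a dependency chain: Flash Linear Attention is required (and itself requires Triton), while causal-conv1d is recommended but optional. A minimal sketch of an environment check for those dependencies follows; the import names `fla`, `triton`, and `causal_conv1d` are assumptions based on the linked projects, not something this commit specifies.

```python
# Hedged sketch: report which Qwen3-Next dependencies from the README are
# importable. Import names ("fla", "triton", "causal_conv1d") are assumed
# from the linked projects.
from importlib.util import find_spec


def qwen3_next_deps():
    """Return (availability map, list of missing required packages)."""
    required = ["fla", "triton"]   # flash-linear-attention, which needs Triton
    optional = ["causal_conv1d"]   # recommended but not required
    status = {name: find_spec(name) is not None for name in required + optional}
    missing_required = [name for name in required if not status[name]]
    return status, missing_required


if __name__ == "__main__":
    status, missing = qwen3_next_deps()
    for name, ok in status.items():
        print(f"{name}: {'found' if ok else 'missing'}")
    if missing:
        print(f"required but missing: {', '.join(missing)}")
```

This only probes whether the packages can be found on the import path; it does not verify versions or that the kernels actually run on the installed hardware.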
