The official and recommended backend server for ExLlamaV3 is [TabbyAPI](https://

### ⚠️ Important

- **Qwen3-Next** support is currently experimental and still requires some optimization, so don't expect
  optimal performance just yet. [Flash Linear Attention](https://github.com/fla-org/flash-linear-attention) is required
  and this in turn requires Triton. [causal-conv1d](https://github.com/Dao-AILab/causal-conv1d) is supported and
  recommended but not required.
- **Qwen3-Next** currently does not support tensor/expert parallelism.

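Since the dependencies above are split between required and optional, a quick pre-flight check can save a failed model load. The sketch below is a hypothetical helper (not part of ExLlamaV3 or TabbyAPI), and the module names `fla` and `causal_conv1d` are assumed import names for the two packages linked above:

```python
import importlib.util

def has_module(name: str) -> bool:
    """Return True if a top-level module can be found without importing it."""
    return importlib.util.find_spec(name) is not None

# Assumed import names: flash-linear-attention exposes `fla` (which pulls in
# Triton), and causal-conv1d exposes `causal_conv1d`.
checks = [("fla", True), ("triton", True), ("causal_conv1d", False)]

for mod, required in checks:
    if has_module(mod):
        status = "found"
    else:
        status = "MISSING (required)" if required else "missing (optional)"
    print(f"{mod}: {status}")
```

Running this before starting the backend makes it obvious whether a Qwen3-Next load will fail for a missing required dependency or merely run without the recommended causal-conv1d kernel.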
## Architecture support
