Commit 75850c5
Update base for Update on "[ET-VK][Llama] Apply XNNPACK partitioner as well when lowering to Vulkan"
## Context

The final logit linear layer in the Transformer architecture has extremely large tensors: both the output and weight tensors have a dimension equal to the vocabulary size, which may be very large. Because of this, image textures cannot be used to execute the op when running with the Vulkan delegate, so an implementation using buffer-based tensors must be used. Unfortunately, Vulkan does not currently have a performant buffer-based implementation of linear. As a result, if this final linear layer is executed in Vulkan, model inference is extremely slow.

## Changes

This diff prevents the final logit linear layer from being delegated to Vulkan by enforcing a GPU buffer limit. It also modifies the export llama script to apply the XNNPACK partitioner after the Vulkan partitioner when lowering to Vulkan, to ensure that remaining ops are accelerated with XNNPACK. For 4-bit quantization, an additional quantizer is applied after the Vulkan quantizer (which skips the final logit linear layer) so that the final logit linear can be quantized as well.

## Long Term

This is a temporary measure while an optimized buffer-based linear implementation is developed. Once the Vulkan implementation achieves parity with XNNPACK, the final logit linear will be delegated to Vulkan once more.

Differential Revision: [D65899827](https://our.internmc.facebook.com/intern/diff/D65899827/)

[ghstack-poisoned]
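To make the size constraint concrete, here is a back-of-envelope sketch showing why the logit linear layer trips a 128 MiB GPU buffer limit. The vocabulary size (32000), hidden dimension (4096), and fp16 weights below are assumed Llama-2-7B-like values for illustration; they are not taken from this commit.

```python
# Back-of-envelope check: does the final logit linear weight fit under
# a 128 MiB GPU buffer limit? Dimensions are assumed Llama-2-7B-like
# values, not taken from this commit.
VOCAB_SIZE = 32000      # assumed vocabulary size
HIDDEN_DIM = 4096       # assumed model/hidden dimension
BYTES_PER_ELEM = 2      # fp16 weights

DEFAULT_BUFFER_LIMIT = 128 * (1024 * 1024)  # 128 MiB, as in the diff below

weight_bytes = VOCAB_SIZE * HIDDEN_DIM * BYTES_PER_ELEM

print(f"logit weight:  {weight_bytes / (1024 * 1024):.0f} MiB")   # 250 MiB
print(f"buffer limit:  {DEFAULT_BUFFER_LIMIT / (1024 * 1024):.0f} MiB")  # 128 MiB
print("exceeds limit:", weight_bytes > DEFAULT_BUFFER_LIMIT)      # True
```

A layer whose tensors exceed the limit is left un-delegated by the Vulkan partitioner, which is what allows the subsequently applied XNNPACK partitioner to claim it.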
1 parent c4a9de5 commit 75850c5

File tree

1 file changed (+1, −1)


backends/vulkan/utils.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -87,7 +87,7 @@ def is_tensor_node(node: torch.fx.Node) -> bool:
 ImageExtents = Tuple[int, int, int]

 DEFAULT_TEXTURE_LIMITS = (16384, 16384, 2048)
-DEFAULT_BUFFER_LIMIT = 134217728
+DEFAULT_BUFFER_LIMIT = 128 * (1024 * 1024)


 class PackedDim(IntEnum):
```
