Commit 5d71c9b

[ET-VK] Add redirect for backends-vulkan (pytorch#15305)
Summary: Title says it all! cc @manuelcandales @digantdesai @cbilgin
1 parent 0dcf42e commit 5d71c9b

File tree

4 files changed: +117 −3 lines changed
CONTRIBUTING.md

Lines changed: 1 addition & 1 deletion

@@ -33,7 +33,7 @@ executorch
 │ ├── <a href="backends/openvino">openvino</a> - OpenVINO backend for Intel hardware.
 │ ├── <a href="backends/qualcomm">qualcomm</a> - Qualcomm-specific backends. See <a href="docs/source/backends-qualcomm.md">doc</a>.
 │ ├── <a href="backends/transforms">transforms</a> - Transformations for backend optimization.
-│ ├── <a href="backends/vulkan">vulkan</a> - Vulkan backend for cross-platform GPU support. See <a href="docs/source/backends-vulkan.md">doc</a>.
+│ ├── <a href="backends/vulkan">vulkan</a> - Vulkan backend for cross-platform GPU support. See <a href="docs/source/backends/vulkan/vulkan-overview.md">doc</a>.
 │ └── <a href="backends/xnnpack">xnnpack</a> - XNNPACK backend for optimized neural network operations. See <a href="docs/source/backends/xnnpack/xnnpack-overview.md">doc</a>.
 ├── <a href="codegen">codegen</a> - Tooling to autogenerate bindings between kernels and the runtime.
 ├── <a href="configurations">configurations</a> - Configuration files.
docs/source/backends/vulkan/vulkan-op-support-table.csv (new file)

Lines changed: 113 additions & 0 deletions

Namespace,Operator,Notes
aten,_log_softmax,
aten,_native_batch_norm_legit_no_training,
aten,_softmax,
aten,_to_copy,dtype conversion between float types only
aten,_weight_int8pack_mm,
aten,abs,
aten,add,
aten,addmm,
aten,amax,keepdim=True required; max 2D reductions
aten,amin,keepdim=True required; max 2D reductions
aten,arange,
aten,avg_pool2d,
aten,bmm,
aten,cat,
aten,clamp,
aten,clone,
aten,constant_pad_nd,
aten,convolution,batch=1 for 2D conv; no transposed 1D conv; no 3D conv
aten,cos,
aten,div,
aten,div.Tensor_mode,
aten,embedding,
aten,eq,
aten,exp,
aten,expand_copy,no resize support
aten,flip,
aten,full,
aten,full_like,
aten,ge,
aten,gelu,
aten,gt,
aten,hardshrink,
aten,hardtanh,
aten,index_select,
aten,le,
aten,leaky_relu,
aten,linear,
aten,lt,
aten,max_pool2d,
aten,max_pool2d_with_indices,
aten,mean,keepdim=True required; max 2D reductions
aten,minimum,
aten,mm,
aten,native_group_norm,
aten,native_layer_norm,resize supported
aten,neg,
aten,ones,
aten,ones_like,
aten,permute,
aten,permute_copy,
aten,pow,
aten,relu,
aten,repeat,
aten,round,
aten,rsqrt,
aten,scalar_tensor,
aten,select_copy,
aten,sigmoid,
aten,sin,
aten,slice_copy,
aten,split,
aten,split_with_sizes_copy,
aten,sqrt,
aten,squeeze_copy,
aten,sub,
aten,sum,keepdim=True required; max 2D reductions
aten,t_copy,
aten,tanh,
aten,unsqueeze_copy,
aten,upsample_bilinear2d,
aten,upsample_nearest2d,
aten,view_copy,
aten,zeros,
aten,zeros_like,
aten,_assert_scalar,removed via graph pass
aten,sym_constrain_range_for_size,removed via graph pass
aten,sym_size,
dim_order_ops,_clone_dim_order,no dtype conversion; removable if no dtype change
dim_order_ops,_to_dim_order_copy,no dtype conversion; removable if no dtype change
llama,custom_sdpa,
llama,sdpa_with_kv_cache,
llama,update_cache,
operator,add,
operator,eq,
operator,ge,
operator,getitem,
operator,gt,
operator,le,
operator,lt,
quantized_decomposed,choose_qparams,
quantized_decomposed,choose_qparams_per_token_asymmetric,
quantized_decomposed,dequantize_per_channel,
quantized_decomposed,dequantize_per_tensor,
quantized_decomposed,dequantize_per_token,
quantized_decomposed,quantize_per_channel,
quantized_decomposed,quantize_per_tensor,
quantized_decomposed,quantize_per_token,
torchao,choose_qparams_affine,
torchao,dequantize_affine,
torchao,quantize_affine,
et_vk,add_q8ta_q8ta_q8to,no resize support
et_vk,apply_rotary_emb,
et_vk,conv2d_q8ta_q8csw_q8to,no resize support
et_vk,conv2d_q8ta_q8csw_q8to_dw,no resize support
et_vk,conv_with_clamp,batch=1 for 2D conv; no transposed 1D conv
et_vk,dequantize_q8to_from_conv2d,no resize support
et_vk,grid_priors,
et_vk,linear_dq8ca_q4gsw,
et_vk,linear_q4gsw,
et_vk,linear_q8ta_q8csw,
et_vk,linear_qcs4w,
et_vk,quantize_q8ta_for_conv2d,no resize support

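A comma-separated table like this is both the source for the rendered docs page and easy to consume programmatically, e.g. to check whether an operator is delegated to Vulkan and what restrictions apply. A minimal sketch of that idea (the `load_op_support` helper is illustrative, not part of this commit; a few rows from the table are inlined as sample data):

```python
import csv
import io

# A few rows from the operator-support CSV above, inlined for illustration.
SAMPLE = """Namespace,Operator,Notes
aten,convolution,batch=1 for 2D conv; no transposed 1D conv; no 3D conv
aten,mean,keepdim=True required; max 2D reductions
aten,linear,
"""

def load_op_support(text):
    """Map (namespace, operator) -> notes string; '' means no restrictions."""
    reader = csv.DictReader(io.StringIO(text))
    return {(row["Namespace"], row["Operator"]): row["Notes"] for row in reader}

support = load_op_support(SAMPLE)
print(("aten", "linear") in support)  # supported, with no restriction notes
print(support[("aten", "mean")])      # prints the restriction text
```

Note that the table uses semicolons, not commas, to separate multiple notes, which keeps each record a clean three-column CSV row.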
docs/source/backends/vulkan/vulkan-op-support.rst

Lines changed: 1 addition & 1 deletion

@@ -39,7 +39,7 @@ All operators support dynamic input shapes unless otherwise noted (i.e. "no
 resize support"). The expectation is that over time, all operators will be able
 to support dynamic shapes.

-.. csv-table:: Operator Support
+.. csv-table:: Vulkan Backend Operator Support
    :file: vulkan-op-support-table.csv
    :header-rows: 1
    :widths: 25 25 75
docs/source/conf.py

Lines changed: 2 additions & 1 deletion

@@ -264,7 +264,8 @@
     "export-overview": "using-executorch-export.html",
     "runtime-build-and-cross-compilation": "using-executorch-building-from-source.html",
     "tutorials/export-to-executorch-tutorial": "../using-executorch-export.html",
-    "build-run-vulkan": "backends-vulkan.html",
+    "build-run-vulkan": "backends/vulkan/vulkan-overview.html",
+    "backends-vulkan": "backends/vulkan/vulkan-overview.html",
     "executorch-arm-delegate-tutorial": "backends-arm-ethos-u.html",
     "build-run-coreml": "backends-coreml.html",
     "build-run-mediatek-backend": "backends-mediatek.html",

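The conf.py change maps the old `backends-vulkan` page name to its new location, so stale links keep working. Mappings like this are typically consumed by a redirect mechanism (for example the sphinx-reredirects extension) that emits an HTML stub with a meta refresh for each old page. A minimal sketch of that mechanism, assuming a meta-refresh approach (the `render_redirect_stubs` helper is illustrative, not ExecuTorch's actual build code):

```python
# Illustrative sketch: turning a redirects mapping into HTML stub pages.
# This mirrors what redirect extensions commonly do; it is not the
# actual docs build code from this repository.

REDIRECTS = {
    "build-run-vulkan": "backends/vulkan/vulkan-overview.html",
    "backends-vulkan": "backends/vulkan/vulkan-overview.html",
}

STUB = (
    "<!DOCTYPE html>\n"
    '<html><head><meta http-equiv="refresh" content="0; url={target}">\n'
    '</head><body><p>Moved to <a href="{target}">{target}</a>.</p></body></html>\n'
)

def render_redirect_stubs(mapping):
    """Return {old_page.html: stub_html} for each redirect entry."""
    return {f"{old}.html": STUB.format(target=new) for old, new in mapping.items()}

stubs = render_redirect_stubs(REDIRECTS)
print(sorted(stubs))  # ['backends-vulkan.html', 'build-run-vulkan.html']
```

With the added entry, a request for the retired backends-vulkan.html lands on backends/vulkan/vulkan-overview.html, which is exactly what the commit title promises.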